professor a: ! maybe it 's just , how many t u how many times you crash in a day . phd g: or maybe it 's once you ' ve done enough meetings it wo n't crash on you anymore . professor a: that 's that 's great . do we have an agenda ? liz and andreas ca n't sh ca n't , ca n't come . grad b: i have no idea but got it a few minutes ago . right when you were in my office it arrived . grad b: so , does anyone have any a agenda items other than me ? i actually have one more also which is to talk about the digits . professor a: , right , so i was just gon na talk briefly about the nsf itr . professor a: , i wo n't say much , but then , you said wanna talk about digits ? grad b: i have a short thing about digits and then i wanna talk a little bit about naming conventions , although it 's unclear whether this is the right place to talk about it . so maybe just talk about it very briefly and take the details to the people who for whom it 's relevant . professor a: if we , we should n't add things in just to add things in . i ' m actually pretty busy today , so if we can we grad b: so the only thing i wanna say about digits is , we are done with the first test set . there are probably forms here and there that are marked as having been read that were n't really read . so i wo n't really know until i go through all the transcriber forms and extract out pieces that are in error . so i wa . two things . the first is what should we do about digits that were misread ? my opinion is , we should just throw them out completely , and have them read again by someone else . , the grouping is completely random , grad b: so it 's perfectly fine to put a group together again of errors and have them re - read , just to finish out the test set . grad b: , the other thing you could do is change the transcript to match what they really said . so those are the two options . professor a: but there 's often things where people do false starts . i know i ' ve done it , where i say a grad b: what the transcribers did with that is if they did a correction , and they eventually did read the right string , you extract the right string . phd g: , you 're talking about where they completely read the wrong string and did n't correct it ? postdoc f: , and s and you 're talking string - wise , you 're not talking about the entire page ? grad b: and so the two options are change the transcript to match what they really said , but then the transcript is n't the aurora test set anymore . i do n't think that really matters because the conditions are so different . and that would be a little easier . professor a: , i would , tak do the easy way , it it 's kinda , wh who knows what studies people will be doing on speaker - dependent things professor a: so that 's a couple hours of , speech , probably . which is a reasonable test set . grad b: and , jane , i do have a set of forms which you have copies of somewhere . grad b: , i was just wond i had all of them back from you . and then the other thing is that , the forms in front of us here that we 're gon na read later , were suggested by liz grad b: because she wanted to elicit some different prosodics from digits . and so , wanted people to , take a quick look at the instructions grad b: and the way it wa worked and see if it makes sense and if anyone has any comments on it . professor a: i see . and the decision here , was to continue with the words rather than the numerics . grad b: , yes , although we could switch it back . the problem was o and zero . 
although we could switch it back and tell them always to say " zero " or always to say " o " . professor a: or neither . but it 's just two thing ways that you can say it . professor a: right ? that 's the only thought i have because if you t start talking about these , u tr she 's trying to get at natural groupings , but it there 's nothing natural about reading numbers this way . grad b: the the problem also is she did want to stick with digits . i ' m speaking for her since she 's not here . but , the other problem we were thinking about is if you just put the numerals , they might say forty - three instead of four three . postdoc f: , if there 's space , though , between them . , you can with when you space them out they do n't look like , forty - three anymore . grad b: , she and i were talking about it , and she felt that it 's very , very natural to do that chunking . professor a: she 's right . it 's it 's a different problem . it 's a it 's an interesting problem , we ' ve done with numbers before , and sometimes people if you say s " three nine eight one " sometimes people will say " thirty - nine eighty - one " or " three hundred eighty - nine one " , or i do n't think they 'd say that , but th professor a: but , th thirty - eight ninety - one is probably how they 'd do it . grad b: so . , this is something that liz and i spoke about and , since this was something that liz asked for specifically , we need to defer to her . professor a: - . ok . , we 're probably gon na be collecting meetings for a while and if we decide we still wanna do some digits later we might be able to do some different ver different versions , professor a: but this is the next suggestion , so . ok , so e l i , let me , get my short thing out about the nsf . i sent this actually this is maybe a little side thing . , i sent to what we had , in some previous mail , as the right joint thing to send to , which was " m mtg rcdr hyphen joint " . grad b: it 's that 's because they set the one up at uw that 's not on our side , that 's on the u - dub side . and so u - uw set it up as a moderated list . grad b: and , i have no idea whether it actually ever goes to anyone so you might just wanna mail to mari professor a: no no , th i got , little excited notes from mari and jeff and so on , grad b: so the moderator actually did repost it . cuz i had sent one earlier actually the same thing happened to me i had sent one earlier . the message says , " you 'll be informed " and then i was never informed but i got replies from people indicating that they had gotten it , so . it 's just to prevent spam . professor a: so o ok . , anyway , i everybody here are y are you are on that list , right ? so you got the note ? ok . so this was , a , proposal that we put in before on more higher level , issues in meetings , from i higher level from my point of view . , and , meeting mappings , and , so is i for it was a proposal for the itr program , information technology research program 's part of national science foundation . it 's the second year of their doing , these grants . they 're they 're a lot of them are some of them anyway , are larger grants than the usual , small nsf grants , and . so , they 're very competitive , and they have a first phase where you put in pre - proposals , and we , got through that . and so th the next phase will be we 'll actually be doing a larger proposal . and i ' m i hope to be doing very little of it . 
and , which was also true for the pre - proposal , there 'll be bunch of people working on it . so . grad b: i that 's a good thing cuz that way i got my papers done early . professor a: my favorite is was when one reviewer says , " , this should be far more detailed " , and the nex the next reviewer says , " , there 's way too much detail " . grad b: or " this is way too general " , and the other reviewer says , " this is way too specific " . this is way too hard , way too easy . grad b: it sounded like they the first gate was pretty easy . is that right ? that they did n't reject a lot of the pre - proposals ? professor a: i should go back and look . i did n't i do n't think that 's true . professor a: but they have to weed out enough so that they have enough reviewers . so , , maybe they did n't r weed out as much as usual , but it 's usually a pretty but it . it 's it 's certainly not i ' m that it 's not down to one in two of what 's left . i ' m it 's , professor a: there 's different numbers of w awards for different size they have three size grants . this one there 's , see the small ones are less than five hundred thousand total over three years and that they have a fair number of them . and the large ones are , boy , i forget , more than a million and a half , more than two million like that . and and we 're in the middle category . we 're , , i forget what it was . but , i do n't remember , but it 's pr probably along the li i could be wrong on this , but probably along the lines of fifteen or that they 'll fund , or twenty . when they do you do how many they funded when they f in chuck 's , that he got last year ? grad b: it was smaller , that it was like four or five , was n't it ? professor a: last time they just had two categories , small and big , and this time they came up with a middle one , so it 'll there 'll be more of them that they fund than of the big . phd g: if we end up getting this , what will it mean to icsi in terms of , w wh where will the money go to , what would we be doing with it ? professor a: , it i none of it will go for those yachts that we ' ve talking about . grad b: it 's go higher level than we ' ve been talking about for meeting recorder . professor a: the other things that we have , been working on with , the c with communicator , especially with the newer things with the more acoustically - oriented things are lower level . and , this is dealing with , mapping on the level of , the conversation of mapping the conversations professor a: to different planes . so . but , . so it 's all that none of us are doing right now , or none of us are funded for , so it 's it would be new . phd g: so assuming everybody 's completely busy now , it means we 're gon na hafta , hire more students , or , something ? professor a: there 's evenings , and there 's weekends , there would be new hires , and there would be expansion , but , also , there 's always for everybody there 's always things that are dropping off , grants that are ending , or other things that are ending , so , professor a: there 's a continual need to bring in new things . but but there definitely would be new , students , professor a: we got we have , two of them are two in the c there 're two in the class already here , and then and , then there 's a third who 's doing a project here , who , but he wo n't be in the country that long , maybe another will end up . actually there is one other guy who 's looking that 's that guy , jeremy ? . 
anyway , that 's all i was gon na say is that 's , that 's and we 're sorta preceding to the next step , and , it 'll mean some more work , , in march in getting the proposal out , and then , it 's , we 'll see what happens . , the last one was that you had there , was about naming ? grad b: it just , we ' ve been cutting up sound files , in for ba both digits and for , doing recognition . and liz had some suggestions on naming and it just brought up the whole issue that has n't really been resolved about naming . so , one thing she would like to have is for all the names to be the same length so that sorting is easier . , same number of characters so that when you 're sorting filenames you can easily extract out bits and pieces that you want . and that 's easy enough to do . and i do n't think we have so many meetings that 's a big deal just to change the names . so that means , instead of calling it " mr one " , " mr two " , you 'd call it " mrm zero one " , mrm zero two , things like that . just so that they 're all the same length . postdoc f: but , when you , do things like that you can always as long as you have , you can always search from the beginning or the end of the string . grad b: alright , so we have th we 're gon na have the speaker id , the session , information on the microphones , grad b: and so if each one of those is a fixed length , the sorting becomes a lot easier . grad d: she wanted to keep them the same lengths across different meetings also . so like , the nsa meeting lengths , all filenames are gon na be the same length as the meeting recorder meeting names ? grad b: and as i said , the it 's we just do n't have that many that 's a big deal . grad b: and so , , at some point we have to take a few days off , let the transcribers have a few days off , make no one 's touching the data and reorganize the file structures . and when we do that we can also rationalize some of the naming . postdoc f: i would think though that the transcribe the transcripts themselves would n't need to have such lengthy names . so , you 're dealing with a different domain there , and with start and end times and all that , and channels and , grad b: right . so the only thing that would change with that is just the directory names , grad b: i would change them to match . so instead of being mr one it would be mrm zero one . but i do n't think that 's a big deal . grad b: so for m the meetings we were thinking about three letters and three numbers for meeting i ds . , for speakers , m or f and then three numbers , for , and , that also brings up the point that we have to start assembling a speaker database so that we get those links back and forth and keep it consistent . and then , the microphone issues . we want some way of specifying , more than looking in the " key " file , what channel and what mike . what channel , what mike , and what broadcaster . or i how to s say it . so with this one it 's this particular headset with this particular transmitter w as a wireless . and that one is a different headset and different channel . and so we just need some naming conventions on that . and , that 's gon na become especially important once we start changing the microphone set - up . we have some new microphones that i 'd like to start trying out , once i test them . and then we 'll need to specify that somewhere . so i was just gon na do a fixed list of , microphones and types . 
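a minimal sketch of the fixed-length naming idea , assuming hypothetical field widths ( three letters plus three digits for meetings , m or f plus three digits for speakers , two-digit channels , seven-digit millisecond times ) since the exact convention was still being settled :

```python
# Illustrative only: the field widths and tag formats are assumptions
# drawn from the discussion, not a settled convention.
import re

def segment_name(meeting_id, speaker_id, channel, start_ms, end_ms):
    """e.g. segment_name('mrm001', 'm012', 3, 1234, 5678)
       -> 'mrm001_m012_c03_0001234_0005678'"""
    return f"{meeting_id}_{speaker_id}_c{channel:02d}_{start_ms:07d}_{end_ms:07d}"

name = segment_name("mrm001", "m012", 3, 1234, 5678)

# Because every field is fixed-width, plain string sorting orders segments
# by meeting, then speaker, then channel, then time, and fields can be
# recovered by slicing rather than parsing:
meeting, speaker, chan = name[0:6], name[7:11], name[12:15]
assert (meeting, speaker, chan) == ("mrm001", "m012", "c03")
assert re.fullmatch(r"[a-z]{3}\d{3}_[mf]\d{3}_c\d{2}_\d{7}_\d{7}", name)
```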
so , as i said professor a: , since we have such a short agenda list i wi i will ask how are the transcriptions going ? postdoc f: the the news is that i ' ve i s so in s so i ' ve switched to start my new sentence . i switched to doing the channel - by - channel transcriptions to provide , the , tighter time bins for partly for use in thilo 's work and also it 's of relevance to other people in the project . and , i discovered in the process a couple of interesting things , which , one of them is that , it seems that there are time lags involved in doing this , , using an interface that has so much more complexity to it . and i and i wanted to maybe ask , chuck to help me with some of the questions of efficiency . maybe i was thinking maybe the best way to do this in the long run may be to give them single channel parts and then piece them together later . and i have a script , piece them together . , so it 's like , i know that take them apart and put them together and i 'll end up with the representation which is where the real power of that interface is . and it may be that it 's faster to transcribe a channel at a time with only one , sound file and one , set of , utterances to check through . professor a: i ' m a little confused . that one of the reason we thought we were so much faster than , the other transcription , thing was that we were using the mixed file . postdoc f: , yes . ok . but , with the mixed , when you have an overlap , you only have a choice of one start and end time for that entire overlap , which means that you 're not tightly , tuning the individual parts th of that overlap by different speakers . so someone may have only said two words in that entire big chunk of overlap . and for purposes of , things like , so things like training the speech - nonspeech segmentation thing . th - it 's necessary to have it more tightly tuned than that . and w and , is a it would be wonderful if , it 's possible then to use that algorithm to more tightly tie in all the channels after that but , , i ' ve th the so , i exactly where that 's going at this point . but m i was experimenting with doing this by hand and i really do think that it 's wise that we ' ve had them start the way we have with , m y working off the mixed signal , having the interface that does n't require them to do the ti , the time bins for every single channel at a t , through the entire interaction . , i did discover a couple other things by doing this though , and one of them is that , once in a while a backchannel will be overlooked by the transcriber . as you might expect , because when it 's a b backchannel could happen in a very densely populated overlap . and if we 're gon na study types of overlaps , which is what i wanna do , an analysis of that , then that really does require listening to every single channel all the way through the entire length for all the different speakers . now , for only four speakers , that 's not gon na be too much time , but if it 's nine speakers , then that i that is more time . so it 's li , wondering it 's like this it 's really valuable that thilo 's working on the speech - nonspeech segmentation because maybe , we can close in on that wi without having to actually go to the time that it would take to listen to every single channel from start to finish through every single meeting . phd e: , but those backchannels will always be a problem . 
especially if they 're really short and they 're not very loud and so it can it will always happen that also the automatic s detection system will miss some of them , so . postdoc f: so then , maybe the answer is to , listen especially densely in places of overlap , just so that they 're not being overlooked because of that , and count on accuracy during the sparser phases . cuz there are large s spaces of the that 's a good point . there are large spaces where there 's no overlap . someone 's giving a presentation , or whatever . that 's that 's a good thought . and , let 's see , there was one other thing i was gon na say . it 's really interesting data to work with , i have to say , it 's very enjoyable . i really , not a problem spending time with these data . really interesting . and not just because i ' m in there . no , it 's real interesting . professor a: , it 's a short meeting . , you 're still in the midst of what you 're doing from what you described last time , i assume , phd c: i have n't results , yet but , i ' m continue working with the mixed signal now , after the last experience . phd c: and and i ' m tried to , adjust the to improve , an harmonicity , detector that , i implement . but i have problem because , i get , , very much harmonics now . , harmonic possi possible harmonics , and now i ' m trying to find , some a , of h of help , using the energy to distinguish between possible harmonics , and other fre frequency peaks , that , corres not harmonics . and , i have to talk with y with you , with the group , about the instantaneous frequency , because i have , an algorithm , and , i get , mmm , t results similar results , like , the paper , that i am following . but , the rules , that , people used in the paper to distinguish the harmonics , is does n't work . and i not that i , the way o to ob the way to obtain the instantaneous frequency is right , or it 's not right . i have n't enough file feeling to distinguish what happened . professor a: , i 'd like to talk with you about it . if if , if i do n't have enough time and y you wanna discuss with someone else some someone else besides us that you might want to talk to , might be stephane . phd g: is is this the algorithm where you hypothesize a fundamental , and then get the energy for all the harmonics of that fundamental ? phd c: no . i do n't proth process the fundamental . i , ehm i calculate the phase derivate using the fft . and the algorithm said that , if you change the , nnn the x - the frequency " x " , using the in the instantaneous frequency , you can find , how , in several frequencies that proba probably the harmonics , the errors of peaks the frequency peaks , move around these , frequency harmonic the frequency of the harmonic . and , if you compare the instantaneous frequency , of the , continuous , , filters , that , they used , to get , the instantaneous frequency , it probably too , you can find , that the instantaneous frequency for the continuous , the output of the continuous filters are very near . and in my case i in equal with our signal , it does n't happened . professor a: . i 'd hafta look at that and think about it . it 's it 's i have n't worked with that either so i ' m not the way the simple - minded way i suggested was what chuck was just saying , is that you could make a sieve . 
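for reference , here is a generic sketch of estimating instantaneous frequency from the phase difference of two short-time ffts one sample apart ; this is a standard phase-vocoder construction , not necessarily the algorithm of the paper being followed , and the parameter choices are assumptions :

```python
# Generic phase-derivative estimate of instantaneous frequency (IF).
# Bins dominated by a harmonic report the harmonic's frequency instead of
# the bin's nominal centre, which is the "concentration" effect described.
import numpy as np

def instantaneous_frequency(x, sr, n_fft=512, hop=1):
    """Per-bin instantaneous-frequency estimates (Hz) for one frame of x."""
    w = np.hanning(n_fft)
    X1 = np.fft.rfft(w * x[:n_fft])
    X2 = np.fft.rfft(w * x[hop:hop + n_fft])
    bin_freqs = np.arange(X1.size) * sr / n_fft
    # Phase advance minus each bin's expected advance, wrapped to [-pi, pi).
    dphi = np.angle(X2) - np.angle(X1) - 2 * np.pi * bin_freqs * hop / sr
    dphi = np.mod(dphi + np.pi, 2 * np.pi) - np.pi
    return bin_freqs + dphi * sr / (2 * np.pi * hop)

sr = 8000
t = np.arange(1024) / sr
x = np.sin(2 * np.pi * 200.0 * t)      # a pure 200 Hz tone
f_inst = instantaneous_frequency(x, sr)
k = int(round(200 * 512 / sr))         # bin nearest the tone
print(f_inst[k])                       # ~200 Hz, off the 15.625 Hz bin grid
```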
, y you actually say that here is let 's let 's hypothesize that it 's this frequency or that frequency , maybe you could use some other cute methods to , short cut it by , making some guesses , but uh , i would , you could make some guesses from , from the auto - correlation but then , given those guesses , try , only looking at the energy at multiples of the of that frequency , and see how much of the take the one that 's maximum . call that the phd c: but , i know many people use , low - pass filter to get , the pitch . phd g: but i but the harmonics are gon na be , , i what the right word is . they 're gon na be dampened by the , vocal tract , right ? the response of the vocal tract . and so just looking at the energy on those at the harmonics , is that gon na ? phd g: i m what you 'd like to do is get rid of the effect of the vocal tract . right ? and just look at the signal coming out of the glottis . professor a: but i do n't need if you need to get rid of it . that 'd be but i if it 's ess if it 's essential . , cuz the main thing is that , you 're trying wha what are you doing this for ? you 're trying distinguish between the case where there is , where there are more than , where there 's more than one speaker and the case where there 's only one speaker . so if there 's more than one speaker , i you could i you 're so you 're not distinguished between voiced and unvoiced , so , i if you do n't care about that see , if you also wanna just determine if you also wanna determine whether it 's unvoiced , then you want to look at high frequencies also , because the f the fact that there 's more energy in the high frequencies is gon na be an ob obvious cue that it 's unvoiced . but , i but , other than that i as far as the one person versus two persons , it would be primarily a low frequency phenomenon . and if you looked at the low frequencies , yes the higher frequencies are gon na there 's gon na be a spectral slope . the higher frequencies will be lower energy . but so what . that 's w phd c: i will prepare for the next week , all my results about the harmonicity and will try to come in and to discuss here , because , i have n't enough feeling to u many time to understand what happened with the with , so many peaks , , and i see the harmonics there many time but , there are a lot of peaks , that , they are not harmonics . i have to discover what is the w the best way to c to use them professor a: , but i do n't think you can you 're not gon na be able to look at every frame , so i really thought that the best way to do it , and i ' m speaking with no experience on this particular point , but , my impression was that the best way to do it was however you you ' ve used instantaneous frequency , whatever . however you ' ve come up you with your candidates , you wanna see how much of the energy is in that as coppo as opposed to all of the all the total energy . and , if it 's voiced , i so y maybe you do need a voiced - unvoiced determination too . but if it 's voiced , and the , e the fraction of the energy that 's in the harmonic sequence that you 're looking at is relatively low , then it should be then it 's more likely to be an overlap . phd c: is height . this this is the idea i had to compare the ratio of the energy of the harmonics with the , total energy in the spectrum and try to get a ratio to distinguish between overlapping and speech . professor a: but you 're looking a y you 're looking at let 's take a second with this . 
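the sieve plus energy-ratio idea from this exchange , sketched with illustrative choices ( the candidate range , fft size , and bin tolerance are all guesses ) :

```python
# Hypothesize candidate fundamentals, sum spectral energy at each
# candidate's harmonic multiples, keep the best candidate, and report the
# harmonic-to-total energy ratio. A low ratio on voiced speech suggests a
# second speaker whose harmonics fall outside the best single series.
import numpy as np

def harmonic_energy_ratio(x, sr, n_fft=1024, tol_bins=1,
                          f0_candidates=np.arange(80.0, 300.0, 2.0)):
    spec = np.abs(np.fft.rfft(np.hanning(n_fft) * x[:n_fft])) ** 2
    total = spec.sum()
    best = 0.0
    for f0 in f0_candidates:
        energy = 0.0
        for h in np.arange(f0, sr / 2, f0):     # harmonics f0, 2*f0, ...
            k = int(round(h * n_fft / sr))      # nearest FFT bin
            energy += spec[max(k - tol_bins, 0):k + tol_bins + 1].sum()
        best = max(best, energy / total)
    return best

sr = 8000
t = np.arange(1024) / sr
one = np.sin(2 * np.pi * 120.0 * t)             # one pitch
two = one + np.sin(2 * np.pi * 173.0 * t)       # two simultaneous pitches
print(harmonic_energy_ratio(one, sr))           # relatively high
print(harmonic_energy_ratio(two, sr))           # noticeably lower
```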
you 're looking at f at the phase derivative , in , what domain ? this is in bands ? or or phd c: it 's a it 's a o i w the band is , from zero to four kilohertz . and i ot i phd c: . i u m t i used two m two method two methods . , one , based on the f , ftt . to fft to obtain the or to study the harmonics from the spectrum directly , and to study the energy and the multiples of frequency . and another algorithm i have is the in the instantaneous frequency , based on the fft to calculate the phase derivate in the time . , n the d i have two algorithms . but , in m i in my opinion the instantaneous frequency , the behavior , was th it was very interesting . because i saw , how the spectrum concentrate , around the harmonic . but then when i apply the rule , of the in the instantaneous frequency of the ne of the continuous filter in the near filter , the rule that , people propose in the paper does n't work . and i why . professor a: but the instantaneous frequency , would n't that give you something more like the central frequency of the , of the where most of the energy is ? , if you does i does it why would it correspond to pitch ? phd c: i get the spectrum , and i represent all the frequency . and when ou i obtained the instantaneous frequency . and i change the @ , using the instantaneous frequency , here . professor a: , so you scale you s you do a scaling along that axis according to instantaneous phd c: because when , when i i use these frequency , the range is different , and the resolution is different . i observe more or less , thing like this . the paper said that , these frequencies are probably , harmonics . but , they used , a rule , based in the because to calculate the instantaneous frequency , they use a hanning window . and , they said that , if these peak are , harmonics , f of the contiguous , w , filters are very near , or have to be very near . but , phh ! i do n't i don i and i what is the distance . and i tried to put different distance , to put difference , length of the window , different front sieve , pfff ! and i not what happened . professor a: ok , i ' m not following it enough . i 'll probably gon na hafta look at the paper , but which i ' m not gon na have time to do in the next few days , but i ' m curious about it . postdoc f: i did i it did occur to me that this is , the return to the transcription , that there 's one third thing i wanted to ex raise as a to as an issue which is , how to handle breaths . so , i wanted to raise the question of whether people in speech recognition want to know where the breaths are . and the reason i ask the question is , aside from the fact that they 're very time - consuming to encode , the fact that there was some i had the indication from dan ellis in the email that i sent to you , and about , that in principle we might be able to , handle breaths by accessi by using cross - talk from the other things , be able that in principle , maybe we could get rid of them , so maybe and i was i , we had this an and i did n't could n't get back to you , but the question of whether it 'd be possible to eliminate them from the audio signal , which would be the ideal situation , professor a: we - see , we 're dealing with real speech and we 're trying to have it be as real as possible and breaths are part of real speech . 
postdoc f: , except that these are really truly , ther there 's a segment in o the one i did n the first one that i did for i for this , where truly w we 're hearing you breathing like as if we 're you 're in our ear , and it 's like i y i , breath is natural , but not postdoc f: except that we 're trying to mimic , i see what you 're saying . you 're saying that the pda application would have , have to cope with breath . grad b: but more people than just pda users are interested in this corpus . so so mean you 're right grad b: but we do n't wanna w remove it from the corpus , in terms of delivering it because the people will want it in there . professor a: i right . if if it gets in the way of what somebody is doing with it then you might wanna have some method which will allow you to block it , but you it 's real data . you do n't wanna b but you do n't professor a: if s , if there 's a little bit of noise out there , and somebody is talking about something they 're doing , that 's part of what we accept as part of a real meeting , even and we have the f the fan and the in the projector up there , and , this is it 's this is actual that we wanna work with . postdoc f: this is in very interesting because i it has a i it shows very clearly the contrast between , speech recognition research and discourse research because in discourse and linguistic research , what counts is what 's communit communicative . and breath , everyone breathes , they breathe all the time . and once in a while breath is communicative , but r very rarely . ok , so now , i had a discussion with chuck about the data structure and the idea is that the transcripts will that get stored as a master there 'll be a master transcript which has in it everything that 's needed for both of these uses . and the one that 's used for speech recognition will be processed via scripts . , like , don 's been writing scripts and , to process it for the speech recognition side . discourse side will have this side over he the we 'll have a s ch , not being very fluent here . but , this the discourse side will have a script which will stri strip away the things which are non - communicative . ok . so then the then let 's think about the practicalities of how we get to that master copy with reference to breaths . so what i would r what i would wonder is would it be possible to encode those automatically ? could we get a breath detector ? postdoc f: , you just have no idea . , if you 're getting a breath several times every minute , and just simply the keystrokes it takes to negotiate , to put the boundaries in , to type it in , i it 's just a huge amount of time . postdoc f: and you wanna be it 's used , and you wanna be it 's done as efficiently as possible , and if it can be done automatically , that would be ideal . postdoc f: , ok . so now there 's another possibility which is , the time boundaries could mark off words from nonwords . and that would be extremely time - effective , if that 's sufficient . professor a: i ' m think if it 's too hard for us to annotate the breaths per se , we are gon na be building up models for these things and these things are somewhat self - aligning , so if so , we i if we say there is some a thing which we call a " breath " or a " breath - in " or " breath - out " , the models will learn that thing . , so but you do want them to point them at some region where the breaths really are . so postdoc f: and that would n't be a problem to have it , pause plus breath plus laugh plus sneeze ? 
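the master-transcript-plus-scripts idea reduces , on the discourse side , to a small filter ; a sketch , assuming a hypothetical {breath} / {laugh} style of event markup ( the actual markup conventions were still under discussion ) :

```python
# The {breath}/{laugh} token convention here is an assumption, not the
# transcribers' actual markup.
import re

NONCOMMUNICATIVE = re.compile(r"\{(?:breath|inhale|exhale|laugh|sneeze)\}")

def discourse_view(master_line):
    """Strip non-communicative event tokens; collapse leftover whitespace."""
    stripped = NONCOMMUNICATIVE.sub("", master_line)
    return re.sub(r"\s{2,}", " ", stripped).strip()

master = "so {breath} we are done with the first test set {laugh}"
print(discourse_view(master))  # -> "so we are done with the first test set"
```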
professor a: , i there is there 's this dynamic tension between marking everything , as , and marking just a little bit and counting on the statistical methods . the more we can mark the better . but if there seems to be a lot of effort for a small amount of reward in some area , and this might be one like this although i 'd be interested to h get input from liz and andreas on this to see if they cuz they ' ve - they ' ve got lots of experience with the breaths in , their transcripts . professor a: actually , yes they do , but we can handle that without them here . but but , you were gon na say something about phd g: , , one possible way that we could handle it is that , as the transcribers are going through , and if they get a hunk of speech that they 're gon na transcribe , u th they 're gon na transcribe it because there 's words in there or whatnot . if there 's a breath in there , they could transcribe that . postdoc f: that 's what they ' ve been doing . so , within an overlap segment , they do this . phd g: but right . but if there 's a big hunk of speech , let 's say on morgan 's mike where he 's not talking , do n't worry about that . so what we 're saying is , there 's no guarantee that , so for the chunks that are transcribed , everything 's transcribed . but outside of those boundaries , there could have been that was n't transcribed . so you just somebody ca n't rely on that data and say " that 's perfectly clean data " . do you see what i ' m saying ? phd g: so i would say do n't tell them to transcribe anything that 's outside of a grouping of words . phd e: , and that 's that quite co corresponds to the way i try to train the speech - nonspeech detector , as i really try to not to detect those breaths which are not within a speech chunk but with which are just in a silence region . and they so they hopefully wo n't be marked in those channel - specific files . professor a: u i wanted to comment a little more just for clarification about this business about the different purposes . professor a: see , in a way this is a really key point , that for speech recognition , research , e a it 's not just a minor part . , the i would say the core thing that we 're trying to do is to recognize the actual , meaningful components in the midst of other things that are not meaningful . so it 's critical it 's not just incidental it 's critical for us to get these other components that are not meaningful . because that 's what we 're trying to pull the other out of . that 's our problem . if we had nothing if we had only linguistically - relevant things if we only had changes in the spectrum that were associated with words , with different spectral components , and , we did n't have noise , we did n't have convolutional errors , we did n't have extraneous , behaviors , and moving your head and all these sorts of things , then , actually speech recognition i is n't that bad right now . you can it 's the technology 's come along pretty . the the reason we still complain about it is because is when you have more realistic conditions then things fall apart . postdoc f: ok , fair enough . i , what i was wondering is what at what level does the breathing aspect enter into the problem ? 
because if it were likely that a pda would be able to be built which would get rid of the breathing , so it would n't even have to be processed at thi at this computational le , let me see , it 'd have to be computationally processed to get rid of it , but if there were , like likely on the frontier , a good breath extractor then , and then you 'd have to professor a: that and we do n't either . so it 's it right now it 's just raw d it 's just data that we 're collecting , and so we do n't wanna presuppose that people will be able to get rid of particular degradations because that 's actually the research that we 're trying to feed . so , an and maybe in five years it 'll work really , and it 'll only mess - up ten percent of the time , but then we would still want to account for that ten percent , postdoc f: i there 's another aspect which is that as we ' ve improved our microphone technique , we have a lot less breath in the more recent , recordings , so it 's in a way it 's an artifact that there 's so much on the earlier ones . phd g: one of the , just to add to this one of the ways that we will be able to get rid of breath is by having models for them . , that 's what a lot of people do nowadays . and so in order to build the model you need to have some amount of it marked , so that where the boundaries are . so , i do n't think we need to worry a lot about breaths that are happening outside of a , conversation . we do n't have to go and search for them to mark them , but , if they 're there while they 're transcribing some hunk of words , i 'd say put them in if possible . postdoc f: ok , and it 's also the fact that they differ a lot from one channel to the other because of the way the microphone 's adjusted .
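the " models for breaths " approach mentioned here usually amounts to giving marked events their own vocabulary entries and acoustic units so the recognizer can absorb them ; a schematic sketch , with a made-up lexicon format rather than any specific toolkit's :

```python
# Schematic only: lexicon entries and phone names are invented for
# illustration; real systems use their own formats.
lexicon = {
    "the":      ["dh", "ah"],
    "meeting":  ["m", "iy", "t", "ih", "ng"],
    "{breath}": ["BRH"],   # dedicated non-speech acoustic unit
    "{laugh}":  ["LAU"],
}

def to_training_units(transcript_tokens):
    """Map transcribed tokens (words and marked events) to acoustic units."""
    return [unit for tok in transcript_tokens for unit in lexicon[tok]]

print(to_training_units(["the", "{breath}", "meeting"]))
# -> ['dh', 'ah', 'BRH', 'm', 'iy', 't', 'ih', 'ng']
```

this is why some amount of marked breath data matters : without labelled regions there is nothing to align the breath unit against .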
topics discussed by the berkeley meeting recorder group included the status of the first test set of digits data , naming conventions for files , speaker identification tags , and encoding files with details about the recording. the group also discussed a proposal for a grant from the nsf's itr ( information technology research ) program , transcriptions , and efforts by speaker mn005 to detect speaker overlap using harmonicity-related features. particular attention was paid to questions about transcription procedure , i.e. how to deal with overlooked backchannels and audible breaths.

a small percentage of transcripts will be changed to reflect misread , uncorrected digits. a speaker database will be compiled to establish consistent links between speakers and their corresponding identification tags. sections of densely overlapping speech will require hand-checking so that overlooked backchannels may be manually segmented and labelled. the transcribers should only code audible breaths within a grouping of words , and not outside regions of continuous speech. it was further determined that audible breaths are an important facet of recorded speech , and that removing them from the corpus would be contrary to the aims of the project. speaker mn005 will prepare his results for detecting speaker overlap and present them at the next meeting.

during digits readings , subjects tend to chunk numbers together rather than reading each number separately. when working from the mixed channel , transcribers may select only one start and end time for an entire stretch of overlapping speech , resulting in points of overlap that are less tightly tuned. transcribers are also likely to overlook backchannels in densely populated sections of speaker overlap. speaker mn014 reported that this is problematic for the automatic detection of speech and non-speech as well , since backchannels that are very short and not very loud will inevitably be missed. speaker mn005 reported problems distinguishing between possible harmonics and other frequency peaks , and with his algorithm for obtaining the instantaneous frequency. the encoding of all audible breaths is too time-consuming.

the first test set of digits is complete and includes 4,000 lines , each comprising between 1 and 10 digits. new digits forms were distributed for eliciting different prosodic groupings of numbers. new naming conventions were discussed as a means of facilitating the sorting process : existing files will be renamed so that all filenames are of equal length , similar changes will be made to speaker identification tags , and filenames will also encode channel , microphone , and broadcaster information. a proposal is being drafted for a grant from the nsf's itr program to extend the research initiatives of the meeting recorder project. speaker fe008 is performing channel-by-channel transcriptions to create tighter time bins ; tentative plans are to assign single channels to the transcriber pool and then piece them together afterwards. efforts by speaker mn005 are in progress to detect speaker overlap in the mixed signal using harmonicity-related features. to identify harmonics , speaker me013 recommended hypothesizing candidate fundamentals , measuring the energy at multiples of each candidate frequency , and selecting the candidate with the maximum energy. it was also suggested that speaker mn005 determine whether portions of the signal are voiced or unvoiced , since voiced intervals in which a relatively low fraction of the energy lies in the harmonic sequence are likely to indicate sections of overlap.
###dialogue: professor a: ! maybe it 's just , how many t u how many times you crash in a day . phd g: or maybe it 's once you ' ve done enough meetings it wo n't crash on you anymore . professor a: that 's that 's great . do we have an agenda ? liz and andreas ca n't sh ca n't , ca n't come . grad b: i have no idea but got it a few minutes ago . right when you were in my office it arrived . grad b: so , does anyone have any a agenda items other than me ? i actually have one more also which is to talk about the digits . professor a: , right , so i was just gon na talk briefly about the nsf itr . professor a: , i wo n't say much , but then , you said wanna talk about digits ? grad b: i have a short thing about digits and then i wanna talk a little bit about naming conventions , although it 's unclear whether this is the right place to talk about it . so maybe just talk about it very briefly and take the details to the people who for whom it 's relevant . professor a: if we , we should n't add things in just to add things in . i ' m actually pretty busy today , so if we can we grad b: so the only thing i wanna say about digits is , we are done with the first test set . there are probably forms here and there that are marked as having been read that were n't really read . so i wo n't really know until i go through all the transcriber forms and extract out pieces that are in error . so i wa . two things . the first is what should we do about digits that were misread ? my opinion is , we should just throw them out completely , and have them read again by someone else . , the grouping is completely random , grad b: so it 's perfectly fine to put a group together again of errors and have them re - read , just to finish out the test set . grad b: , the other thing you could do is change the transcript to match what they really said . so those are the two options . professor a: but there 's often things where people do false starts . i know i ' ve done it , where i say a grad b: what the transcribers did with that is if they did a correction , and they eventually did read the right string , you extract the right string . phd g: , you 're talking about where they completely read the wrong string and did n't correct it ? postdoc f: , and s and you 're talking string - wise , you 're not talking about the entire page ? grad b: and so the two options are change the transcript to match what they really said , but then the transcript is n't the aurora test set anymore . i do n't think that really matters because the conditions are so different . and that would be a little easier . professor a: , i would , tak do the easy way , it it 's kinda , wh who knows what studies people will be doing on speaker - dependent things professor a: so that 's a couple hours of , speech , probably . which is a reasonable test set . grad b: and , jane , i do have a set of forms which you have copies of somewhere . grad b: , i was just wond i had all of them back from you . and then the other thing is that , the forms in front of us here that we 're gon na read later , were suggested by liz grad b: because she wanted to elicit some different prosodics from digits . and so , wanted people to , take a quick look at the instructions grad b: and the way it wa worked and see if it makes sense and if anyone has any comments on it . professor a: i see . and the decision here , was to continue with the words rather than the numerics . grad b: , yes , although we could switch it back . the problem was o and zero . 
although we could switch it back and tell them always to say " zero " or always to say " o " . professor a: or neither . but it 's just two thing ways that you can say it . professor a: right ? that 's the only thought i have because if you t start talking about these , u tr she 's trying to get at natural groupings , but it there 's nothing natural about reading numbers this way . grad b: the the problem also is she did want to stick with digits . i ' m speaking for her since she 's not here . but , the other problem we were thinking about is if you just put the numerals , they might say forty - three instead of four three . postdoc f: , if there 's space , though , between them . , you can with when you space them out they do n't look like , forty - three anymore . grad b: , she and i were talking about it , and she felt that it 's very , very natural to do that chunking . professor a: she 's right . it 's it 's a different problem . it 's a it 's an interesting problem , we ' ve done with numbers before , and sometimes people if you say s " three nine eight one " sometimes people will say " thirty - nine eighty - one " or " three hundred eighty - nine one " , or i do n't think they 'd say that , but th professor a: but , th thirty - eight ninety - one is probably how they 'd do it . grad b: so . , this is something that liz and i spoke about and , since this was something that liz asked for specifically , we need to defer to her . professor a: - . ok . , we 're probably gon na be collecting meetings for a while and if we decide we still wanna do some digits later we might be able to do some different ver different versions , professor a: but this is the next suggestion , so . ok , so e l i , let me , get my short thing out about the nsf . i sent this actually this is maybe a little side thing . , i sent to what we had , in some previous mail , as the right joint thing to send to , which was " m mtg rcdr hyphen joint " . grad b: it 's that 's because they set the one up at uw that 's not on our side , that 's on the u - dub side . and so u - uw set it up as a moderated list . grad b: and , i have no idea whether it actually ever goes to anyone so you might just wanna mail to mari professor a: no no , th i got , little excited notes from mari and jeff and so on , grad b: so the moderator actually did repost it . cuz i had sent one earlier actually the same thing happened to me i had sent one earlier . the message says , " you 'll be informed " and then i was never informed but i got replies from people indicating that they had gotten it , so . it 's just to prevent spam . professor a: so o ok . , anyway , i everybody here are y are you are on that list , right ? so you got the note ? ok . so this was , a , proposal that we put in before on more higher level , issues in meetings , from i higher level from my point of view . , and , meeting mappings , and , so is i for it was a proposal for the itr program , information technology research program 's part of national science foundation . it 's the second year of their doing , these grants . they 're they 're a lot of them are some of them anyway , are larger grants than the usual , small nsf grants , and . so , they 're very competitive , and they have a first phase where you put in pre - proposals , and we , got through that . and so th the next phase will be we 'll actually be doing a larger proposal . and i ' m i hope to be doing very little of it . 
and , which was also true for the pre - proposal , there 'll be bunch of people working on it . so . grad b: i that 's a good thing cuz that way i got my papers done early . professor a: my favorite is was when one reviewer says , " , this should be far more detailed " , and the nex the next reviewer says , " , there 's way too much detail " . grad b: or " this is way too general " , and the other reviewer says , " this is way too specific " . this is way too hard , way too easy . grad b: it sounded like they the first gate was pretty easy . is that right ? that they did n't reject a lot of the pre - proposals ? professor a: i should go back and look . i did n't i do n't think that 's true . professor a: but they have to weed out enough so that they have enough reviewers . so , , maybe they did n't r weed out as much as usual , but it 's usually a pretty but it . it 's it 's certainly not i ' m that it 's not down to one in two of what 's left . i ' m it 's , professor a: there 's different numbers of w awards for different size they have three size grants . this one there 's , see the small ones are less than five hundred thousand total over three years and that they have a fair number of them . and the large ones are , boy , i forget , more than a million and a half , more than two million like that . and and we 're in the middle category . we 're , , i forget what it was . but , i do n't remember , but it 's pr probably along the li i could be wrong on this , but probably along the lines of fifteen or that they 'll fund , or twenty . when they do you do how many they funded when they f in chuck 's , that he got last year ? grad b: it was smaller , that it was like four or five , was n't it ? professor a: last time they just had two categories , small and big , and this time they came up with a middle one , so it 'll there 'll be more of them that they fund than of the big . phd g: if we end up getting this , what will it mean to icsi in terms of , w wh where will the money go to , what would we be doing with it ? professor a: , it i none of it will go for those yachts that we ' ve talking about . grad b: it 's go higher level than we ' ve been talking about for meeting recorder . professor a: the other things that we have , been working on with , the c with communicator , especially with the newer things with the more acoustically - oriented things are lower level . and , this is dealing with , mapping on the level of , the conversation of mapping the conversations professor a: to different planes . so . but , . so it 's all that none of us are doing right now , or none of us are funded for , so it 's it would be new . phd g: so assuming everybody 's completely busy now , it means we 're gon na hafta , hire more students , or , something ? professor a: there 's evenings , and there 's weekends , there would be new hires , and there would be expansion , but , also , there 's always for everybody there 's always things that are dropping off , grants that are ending , or other things that are ending , so , professor a: there 's a continual need to bring in new things . but but there definitely would be new , students , professor a: we got we have , two of them are two in the c there 're two in the class already here , and then and , then there 's a third who 's doing a project here , who , but he wo n't be in the country that long , maybe another will end up . actually there is one other guy who 's looking that 's that guy , jeremy ? . 
anyway , that 's all i was gon na say is that 's , that 's and we 're sorta preceding to the next step , and , it 'll mean some more work , , in march in getting the proposal out , and then , it 's , we 'll see what happens . , the last one was that you had there , was about naming ? grad b: it just , we ' ve been cutting up sound files , in for ba both digits and for , doing recognition . and liz had some suggestions on naming and it just brought up the whole issue that has n't really been resolved about naming . so , one thing she would like to have is for all the names to be the same length so that sorting is easier . , same number of characters so that when you 're sorting filenames you can easily extract out bits and pieces that you want . and that 's easy enough to do . and i do n't think we have so many meetings that 's a big deal just to change the names . so that means , instead of calling it " mr one " , " mr two " , you 'd call it " mrm zero one " , mrm zero two , things like that . just so that they 're all the same length . postdoc f: but , when you , do things like that you can always as long as you have , you can always search from the beginning or the end of the string . grad b: alright , so we have th we 're gon na have the speaker id , the session , information on the microphones , grad b: and so if each one of those is a fixed length , the sorting becomes a lot easier . grad d: she wanted to keep them the same lengths across different meetings also . so like , the nsa meeting lengths , all filenames are gon na be the same length as the meeting recorder meeting names ? grad b: and as i said , the it 's we just do n't have that many that 's a big deal . grad b: and so , , at some point we have to take a few days off , let the transcribers have a few days off , make no one 's touching the data and reorganize the file structures . and when we do that we can also rationalize some of the naming . postdoc f: i would think though that the transcribe the transcripts themselves would n't need to have such lengthy names . so , you 're dealing with a different domain there , and with start and end times and all that , and channels and , grad b: right . so the only thing that would change with that is just the directory names , grad b: i would change them to match . so instead of being mr one it would be mrm zero one . but i do n't think that 's a big deal . grad b: so for m the meetings we were thinking about three letters and three numbers for meeting i ds . , for speakers , m or f and then three numbers , for , and , that also brings up the point that we have to start assembling a speaker database so that we get those links back and forth and keep it consistent . and then , the microphone issues . we want some way of specifying , more than looking in the " key " file , what channel and what mike . what channel , what mike , and what broadcaster . or i how to s say it . so with this one it 's this particular headset with this particular transmitter w as a wireless . and that one is a different headset and different channel . and so we just need some naming conventions on that . and , that 's gon na become especially important once we start changing the microphone set - up . we have some new microphones that i 'd like to start trying out , once i test them . and then we 'll need to specify that somewhere . so i was just gon na do a fixed list of , microphones and types . 
so , as i said professor a: , since we have such a short agenda list i wi i will ask how are the transcriptions going ? postdoc f: the the news is that i ' ve i s so in s so i ' ve switched to start my new sentence . i switched to doing the channel - by - channel transcriptions to provide , the , tighter time bins for partly for use in thilo 's work and also it 's of relevance to other people in the project . and , i discovered in the process a couple of interesting things , which , one of them is that , it seems that there are time lags involved in doing this , , using an interface that has so much more complexity to it . and i and i wanted to maybe ask , chuck to help me with some of the questions of efficiency . maybe i was thinking maybe the best way to do this in the long run may be to give them single channel parts and then piece them together later . and i have a script , piece them together . , so it 's like , i know that take them apart and put them together and i 'll end up with the representation which is where the real power of that interface is . and it may be that it 's faster to transcribe a channel at a time with only one , sound file and one , set of , utterances to check through . professor a: i ' m a little confused . that one of the reason we thought we were so much faster than , the other transcription , thing was that we were using the mixed file . postdoc f: , yes . ok . but , with the mixed , when you have an overlap , you only have a choice of one start and end time for that entire overlap , which means that you 're not tightly , tuning the individual parts th of that overlap by different speakers . so someone may have only said two words in that entire big chunk of overlap . and for purposes of , things like , so things like training the speech - nonspeech segmentation thing . th - it 's necessary to have it more tightly tuned than that . and w and , is a it would be wonderful if , it 's possible then to use that algorithm to more tightly tie in all the channels after that but , , i ' ve th the so , i exactly where that 's going at this point . but m i was experimenting with doing this by hand and i really do think that it 's wise that we ' ve had them start the way we have with , m y working off the mixed signal , having the interface that does n't require them to do the ti , the time bins for every single channel at a t , through the entire interaction . , i did discover a couple other things by doing this though , and one of them is that , once in a while a backchannel will be overlooked by the transcriber . as you might expect , because when it 's a b backchannel could happen in a very densely populated overlap . and if we 're gon na study types of overlaps , which is what i wanna do , an analysis of that , then that really does require listening to every single channel all the way through the entire length for all the different speakers . now , for only four speakers , that 's not gon na be too much time , but if it 's nine speakers , then that i that is more time . so it 's li , wondering it 's like this it 's really valuable that thilo 's working on the speech - nonspeech segmentation because maybe , we can close in on that wi without having to actually go to the time that it would take to listen to every single channel from start to finish through every single meeting . phd e: , but those backchannels will always be a problem . 
especially if they 're really short and they 're not very loud and so it can it will always happen that also the automatic s detection system will miss some of them , so . postdoc f: so then , maybe the answer is to , listen especially densely in places of overlap , just so that they 're not being overlooked because of that , and count on accuracy during the sparser phases . cuz there are large s spaces of the that 's a good point . there are large spaces where there 's no overlap . someone 's giving a presentation , or whatever . that 's that 's a good thought . and , let 's see , there was one other thing i was gon na say . it 's really interesting data to work with , i have to say , it 's very enjoyable . i really , not a problem spending time with these data . really interesting . and not just because i ' m in there . no , it 's real interesting . professor a: , it 's a short meeting . , you 're still in the midst of what you 're doing from what you described last time , i assume , phd c: i have n't results , yet but , i ' m continue working with the mixed signal now , after the last experience . phd c: and and i ' m tried to , adjust the to improve , an harmonicity , detector that , i implement . but i have problem because , i get , , very much harmonics now . , harmonic possi possible harmonics , and now i ' m trying to find , some a , of h of help , using the energy to distinguish between possible harmonics , and other fre frequency peaks , that , corres not harmonics . and , i have to talk with y with you , with the group , about the instantaneous frequency , because i have , an algorithm , and , i get , mmm , t results similar results , like , the paper , that i am following . but , the rules , that , people used in the paper to distinguish the harmonics , is does n't work . and i not that i , the way o to ob the way to obtain the instantaneous frequency is right , or it 's not right . i have n't enough file feeling to distinguish what happened . professor a: , i 'd like to talk with you about it . if if , if i do n't have enough time and y you wanna discuss with someone else some someone else besides us that you might want to talk to , might be stephane . phd g: is is this the algorithm where you hypothesize a fundamental , and then get the energy for all the harmonics of that fundamental ? phd c: no . i do n't proth process the fundamental . i , ehm i calculate the phase derivate using the fft . and the algorithm said that , if you change the , nnn the x - the frequency " x " , using the in the instantaneous frequency , you can find , how , in several frequencies that proba probably the harmonics , the errors of peaks the frequency peaks , move around these , frequency harmonic the frequency of the harmonic . and , if you compare the instantaneous frequency , of the , continuous , , filters , that , they used , to get , the instantaneous frequency , it probably too , you can find , that the instantaneous frequency for the continuous , the output of the continuous filters are very near . and in my case i in equal with our signal , it does n't happened . professor a: . i 'd hafta look at that and think about it . it 's it 's i have n't worked with that either so i ' m not the way the simple - minded way i suggested was what chuck was just saying , is that you could make a sieve . 
, y you actually say that here is let 's let 's hypothesize that it 's this frequency or that frequency , maybe you could use some other cute methods to , short cut it by , making some guesses , but uh , i would , you could make some guesses from , from the auto - correlation but then , given those guesses , try , only looking at the energy at multiples of the of that frequency , and see how much of the take the one that 's maximum . call that the phd c: but , i know many people use , low - pass filter to get , the pitch . phd g: but i but the harmonics are gon na be , , i what the right word is . they 're gon na be dampened by the , vocal tract , right ? the response of the vocal tract . and so just looking at the energy on those at the harmonics , is that gon na ? phd g: i m what you 'd like to do is get rid of the effect of the vocal tract . right ? and just look at the signal coming out of the glottis . professor a: but i do n't need if you need to get rid of it . that 'd be but i if it 's ess if it 's essential . , cuz the main thing is that , you 're trying wha what are you doing this for ? you 're trying distinguish between the case where there is , where there are more than , where there 's more than one speaker and the case where there 's only one speaker . so if there 's more than one speaker , i you could i you 're so you 're not distinguished between voiced and unvoiced , so , i if you do n't care about that see , if you also wanna just determine if you also wanna determine whether it 's unvoiced , then you want to look at high frequencies also , because the f the fact that there 's more energy in the high frequencies is gon na be an ob obvious cue that it 's unvoiced . but , i but , other than that i as far as the one person versus two persons , it would be primarily a low frequency phenomenon . and if you looked at the low frequencies , yes the higher frequencies are gon na there 's gon na be a spectral slope . the higher frequencies will be lower energy . but so what . that 's w phd c: i will prepare for the next week , all my results about the harmonicity and will try to come in and to discuss here , because , i have n't enough feeling to u many time to understand what happened with the with , so many peaks , , and i see the harmonics there many time but , there are a lot of peaks , that , they are not harmonics . i have to discover what is the w the best way to c to use them professor a: , but i do n't think you can you 're not gon na be able to look at every frame , so i really thought that the best way to do it , and i ' m speaking with no experience on this particular point , but , my impression was that the best way to do it was however you you ' ve used instantaneous frequency , whatever . however you ' ve come up you with your candidates , you wanna see how much of the energy is in that as coppo as opposed to all of the all the total energy . and , if it 's voiced , i so y maybe you do need a voiced - unvoiced determination too . but if it 's voiced , and the , e the fraction of the energy that 's in the harmonic sequence that you 're looking at is relatively low , then it should be then it 's more likely to be an overlap . phd c: is height . this this is the idea i had to compare the ratio of the energy of the harmonics with the , total energy in the spectrum and try to get a ratio to distinguish between overlapping and speech . professor a: but you 're looking a y you 're looking at let 's take a second with this . 
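The "sieve" being proposed here, plus the energy-ratio idea that closes the exchange, can be sketched in a few lines of numpy: hypothesize candidate fundamentals, sum the spectral energy at each candidate's multiples, keep the best, and compare that harmonic energy to the total. The candidate range, step size, and windowing below are arbitrary choices, not anything from the actual experiments.

```python
import numpy as np

def harmonic_sieve(frame, sr, f0_min=80.0, f0_max=300.0, step=2.0):
    """Sieve sketch: for each candidate fundamental, sum spectral energy at
    its multiples; return the winning f0 and the ratio of harmonic energy
    to total energy. On a voiced frame, a low ratio would be a cue that
    more than one speaker is talking."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    best_f0, best_energy = 0.0, 0.0
    for f0 in np.arange(f0_min, f0_max, step):
        harmonics = np.arange(f0, sr / 2.0, f0)          # f0, 2*f0, 3*f0, ...
        bins = np.clip(np.round(harmonics / freqs[1]).astype(int),
                       0, len(spec) - 1)
        energy = spec[bins].sum()
        if energy > best_energy:
            best_f0, best_energy = f0, energy
    return best_f0, best_energy / spec.sum()

# toy check: a 120 Hz harmonic complex should give a high ratio
sr = 8000
t = np.arange(1024) / sr
frame = sum(np.sin(2 * np.pi * 120.0 * k * t) for k in range(1, 6))
print(harmonic_sieve(frame, sr))
```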
you 're looking at f at the phase derivative , in , what domain ? this is in bands ? or or phd c: it 's a it 's a o i w the band is , from zero to four kilohertz . and i ot i phd c: . i u m t i used two m two method two methods . , one , based on the f , ftt . to fft to obtain the or to study the harmonics from the spectrum directly , and to study the energy and the multiples of frequency . and another algorithm i have is the in the instantaneous frequency , based on the fft to calculate the phase derivate in the time . , n the d i have two algorithms . but , in m i in my opinion the instantaneous frequency , the behavior , was th it was very interesting . because i saw , how the spectrum concentrate , around the harmonic . but then when i apply the rule , of the in the instantaneous frequency of the ne of the continuous filter in the near filter , the rule that , people propose in the paper does n't work . and i why . professor a: but the instantaneous frequency , would n't that give you something more like the central frequency of the , of the where most of the energy is ? , if you does i does it why would it correspond to pitch ? phd c: i get the spectrum , and i represent all the frequency . and when ou i obtained the instantaneous frequency . and i change the @ , using the instantaneous frequency , here . professor a: , so you scale you s you do a scaling along that axis according to instantaneous phd c: because when , when i i use these frequency , the range is different , and the resolution is different . i observe more or less , thing like this . the paper said that , these frequencies are probably , harmonics . but , they used , a rule , based in the because to calculate the instantaneous frequency , they use a hanning window . and , they said that , if these peak are , harmonics , f of the contiguous , w , filters are very near , or have to be very near . but , phh ! i do n't i don i and i what is the distance . and i tried to put different distance , to put difference , length of the window , different front sieve , pfff ! and i not what happened . professor a: ok , i ' m not following it enough . i 'll probably gon na hafta look at the paper , but which i ' m not gon na have time to do in the next few days , but i ' m curious about it . postdoc f: i did i it did occur to me that this is , the return to the transcription , that there 's one third thing i wanted to ex raise as a to as an issue which is , how to handle breaths . so , i wanted to raise the question of whether people in speech recognition want to know where the breaths are . and the reason i ask the question is , aside from the fact that they 're very time - consuming to encode , the fact that there was some i had the indication from dan ellis in the email that i sent to you , and about , that in principle we might be able to , handle breaths by accessi by using cross - talk from the other things , be able that in principle , maybe we could get rid of them , so maybe and i was i , we had this an and i did n't could n't get back to you , but the question of whether it 'd be possible to eliminate them from the audio signal , which would be the ideal situation , professor a: we - see , we 're dealing with real speech and we 're trying to have it be as real as possible and breaths are part of real speech . 
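On the phase-derivative method debated above: one textbook way to get a per-bin instantaneous frequency from the FFT is to difference the phases of two transforms taken one sample apart. This is a generic sketch of that idea, not necessarily the algorithm in the paper being discussed.

```python
import numpy as np

def inst_freq(frame, sr):
    """Per-bin instantaneous frequency from the phase advance between two
    FFTs one sample apart. Near a strong harmonic, neighboring bins'
    estimates should pull toward the harmonic's true frequency -- which is
    the kind of clustering rule the paper seems to rely on."""
    win = np.hanning(len(frame) - 1)
    p1 = np.angle(np.fft.rfft(frame[:-1] * win))
    p2 = np.angle(np.fft.rfft(frame[1:] * win))
    dphi = np.mod(p2 - p1, 2 * np.pi)        # phase advance per sample
    return dphi * sr / (2 * np.pi)           # in Hz

sr = 8000
t = np.arange(512) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
fi = inst_freq(tone, sr)
k = int(round(440 * (len(t) - 1) / sr))      # bin nearest 440 Hz
print(fi[k - 1:k + 2])                        # all three should be near 440
```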
postdoc f: , except that these are really truly , ther there 's a segment in o the one i did n the first one that i did for i for this , where truly w we 're hearing you breathing like as if we 're you 're in our ear , and it 's like i y i , breath is natural , but not postdoc f: except that we 're trying to mimic , i see what you 're saying . you 're saying that the pda application would have , have to cope with breath . grad b: but more people than just pda users are interested in this corpus . so so mean you 're right grad b: but we do n't wanna w remove it from the corpus , in terms of delivering it because the people will want it in there . professor a: i right . if if it gets in the way of what somebody is doing with it then you might wanna have some method which will allow you to block it , but you it 's real data . you do n't wanna b but you do n't professor a: if s , if there 's a little bit of noise out there , and somebody is talking about something they 're doing , that 's part of what we accept as part of a real meeting , even and we have the f the fan and the in the projector up there , and , this is it 's this is actual that we wanna work with . postdoc f: this is in very interesting because i it has a i it shows very clearly the contrast between , speech recognition research and discourse research because in discourse and linguistic research , what counts is what 's communit communicative . and breath , everyone breathes , they breathe all the time . and once in a while breath is communicative , but r very rarely . ok , so now , i had a discussion with chuck about the data structure and the idea is that the transcripts will that get stored as a master there 'll be a master transcript which has in it everything that 's needed for both of these uses . and the one that 's used for speech recognition will be processed via scripts . , like , don 's been writing scripts and , to process it for the speech recognition side . discourse side will have this side over he the we 'll have a s ch , not being very fluent here . but , this the discourse side will have a script which will stri strip away the things which are non - communicative . ok . so then the then let 's think about the practicalities of how we get to that master copy with reference to breaths . so what i would r what i would wonder is would it be possible to encode those automatically ? could we get a breath detector ? postdoc f: , you just have no idea . , if you 're getting a breath several times every minute , and just simply the keystrokes it takes to negotiate , to put the boundaries in , to type it in , i it 's just a huge amount of time . postdoc f: and you wanna be it 's used , and you wanna be it 's done as efficiently as possible , and if it can be done automatically , that would be ideal . postdoc f: , ok . so now there 's another possibility which is , the time boundaries could mark off words from nonwords . and that would be extremely time - effective , if that 's sufficient . professor a: i ' m think if it 's too hard for us to annotate the breaths per se , we are gon na be building up models for these things and these things are somewhat self - aligning , so if so , we i if we say there is some a thing which we call a " breath " or a " breath - in " or " breath - out " , the models will learn that thing . , so but you do want them to point them at some region where the breaths really are . so postdoc f: and that would n't be a problem to have it , pause plus breath plus laugh plus sneeze ? 
professor a: , i there is there 's this dynamic tension between marking everything , as , and marking just a little bit and counting on the statistical methods . the more we can mark the better . but if there seems to be a lot of effort for a small amount of reward in some area , and this might be one like this although i 'd be interested to h get input from liz and andreas on this to see if they cuz they ' ve - they ' ve got lots of experience with the breaths in , their transcripts . professor a: actually , yes they do , but we can handle that without them here . but but , you were gon na say something about phd g: , , one possible way that we could handle it is that , as the transcribers are going through , and if they get a hunk of speech that they 're gon na transcribe , u th they 're gon na transcribe it because there 's words in there or whatnot . if there 's a breath in there , they could transcribe that . postdoc f: that 's what they ' ve been doing . so , within an overlap segment , they do this . phd g: but right . but if there 's a big hunk of speech , let 's say on morgan 's mike where he 's not talking , do n't worry about that . so what we 're saying is , there 's no guarantee that , so for the chunks that are transcribed , everything 's transcribed . but outside of those boundaries , there could have been that was n't transcribed . so you just somebody ca n't rely on that data and say " that 's perfectly clean data " . do you see what i ' m saying ? phd g: so i would say do n't tell them to transcribe anything that 's outside of a grouping of words . phd e: , and that 's that quite co corresponds to the way i try to train the speech - nonspeech detector , as i really try to not to detect those breaths which are not within a speech chunk but with which are just in a silence region . and they so they hopefully wo n't be marked in those channel - specific files . professor a: u i wanted to comment a little more just for clarification about this business about the different purposes . professor a: see , in a way this is a really key point , that for speech recognition , research , e a it 's not just a minor part . , the i would say the core thing that we 're trying to do is to recognize the actual , meaningful components in the midst of other things that are not meaningful . so it 's critical it 's not just incidental it 's critical for us to get these other components that are not meaningful . because that 's what we 're trying to pull the other out of . that 's our problem . if we had nothing if we had only linguistically - relevant things if we only had changes in the spectrum that were associated with words , with different spectral components , and , we did n't have noise , we did n't have convolutional errors , we did n't have extraneous , behaviors , and moving your head and all these sorts of things , then , actually speech recognition i is n't that bad right now . you can it 's the technology 's come along pretty . the the reason we still complain about it is because is when you have more realistic conditions then things fall apart . postdoc f: ok , fair enough . i , what i was wondering is what at what level does the breathing aspect enter into the problem ? 
because if it were likely that a pda would be able to be built which would get rid of the breathing , so it would n't even have to be processed at thi at this computational le , let me see , it 'd have to be computationally processed to get rid of it , but if there were , like likely on the frontier , a good breath extractor then , and then you 'd have to professor a: that and we do n't either . so it 's it right now it 's just raw d it 's just data that we 're collecting , and so we do n't wanna presuppose that people will be able to get rid of particular degradations because that 's actually the research that we 're trying to feed . so , an and maybe in five years it 'll work really , and it 'll only mess - up ten percent of the time , but then we would still want to account for that ten percent , postdoc f: i there 's another aspect which is that as we ' ve improved our microphone technique , we have a lot less breath in the more recent , recordings , so it 's in a way it 's an artifact that there 's so much on the earlier ones . phd g: one of the , just to add to this one of the ways that we will be able to get rid of breath is by having models for them . , that 's what a lot of people do nowadays . and so in order to build the model you need to have some amount of it marked , so that where the boundaries are . so , i do n't think we need to worry a lot about breaths that are happening outside of a , conversation . we do n't have to go and search for them to mark them , but , if they 're there while they 're transcribing some hunk of words , i 'd say put them in if possible . postdoc f: ok , and it 's also the fact that they differ a lot from one channel to the other because of the way the microphone 's adjusted . ###summary: topics discussed by the berkeley meeting recorder group included the status of the first test set of digits data , naming conventions for files , speaker identification tags , and encoding files with details about the recording. the group also discussed a proposal for a grant from the nsf's itr ( information technology research ) program , transcriptions , and efforts by speaker mn005 to detect speaker overlap using harmonicity-related features. particular focus was paid to questions about transcription procedures , i.e . how to deal with overlooked backchannels , and audible breaths. a small percentage of transcripts will be changed to reflect mis-read , uncorrected digits. a speaker database will be compiled to establish consistent links between speakers and their corresponding identification tags. sections of densely overlapping speech will require hand-checking so that overlooked backchannels may be manually segmented and labelled. the transcribers should only code audible breaths within a grouping of words , and not outside regions of continuous speech. it was further determined that audible breaths are an important facet of recorded speech , and that removing them from the corpus would be contrary to the aims of the project. speaker mn005 will prepare his results for detecting speaker overlap and present them in the next meeting. during digits readings , subjects tend to chunk numbers together rather than reading each number separately. when working from the mixed channel , transcribers may select only one start and end time for overlapping speech , resulting in points of overlap that are less tightly tuned. transcribers are likely to overlook backchannels in densely populated sections of speaker overlap. 
speaker mn014 reported that this is also problematic for the automatic detection of speech and non-speech , as backchannels that are very short and not loud enough will inevitably be overlooked. speaker mn005 reported problems distinguishing between possible harmonics and other frequency peaks , and creating an algorithm for obtaining the instantaneous frequency. the encoding of all audible breaths is too time-consuming. the first test set of digits is complete and includes 4,000 lines , each comprising between 1-10 digits. new digits forms were distributed for eliciting different prosodic groupings of numbers. new naming conventions were discussed as means for facilitating the sorting process. existing files will be changed so that all filenames are of equal length. similar changes will be made to speaker identification tags. files will also contain information specifying channel , microphone , and broadcaster information. a proposal is being drafted for a grant from the nsf's itr program for extending the research initiatives of the meeting recorder project. speaker fe008 is performing channel-by-channel transcriptions to create tighter time bins. tentative plans are to assign single channels to the transcriber pool and then piece them together afterwards. efforts by speaker mn005 are in progress to detect speaker overlap in the mixed signal using harmonicity-related features. for determining the instantaneous frequency , speaker me013 recommended deriving the maxima from energy multiples of a given frequency. it was also suggested that speaker mn005 should determine whether portions of the signal are voiced or unvoiced , as voiced intervals reflecting a relatively low fraction of energy in the harmonic sequence are likely to indicate sections of overlap.
professor a: we 're going ? ok . sh - close your door on the way out ? professor a: probably wanna get this other door , too . so . what are we talking about today ? professor a: the both the sri system and the oth and for one thing that shows the difference between having a lot of training data or not , professor a: , the the best number we have on the english on near microphone only is three or four percent . professor a: and it 's significantly better than that , using fairly simple front - ends on the , with the sri system . so i th that the but that 's using a pretty huge amount of data , mostly not digits , but then again , . , mostly not digits for the actual training the h m ms whereas in this case we 're just using digits for training the h m professor a: did anybody mention about whether the sri system is a is doing the digits the wor as a word model or as a sub s sub - phone states ? phd e: . so , because it 's their very d huge , their huge system . and . but . so . there is one difference , the sri system the result for the sri system that are represented here are with adaptation . so there is it 's their complete system and including on - line unsupervised adaptation . phd e: and if you do n't use adaptation , the error rate is around fifty percent worse , if i remember . professor a: still . but but what i 'd be interested to do given that , is that we should take i that somebody 's gon na do this , right ? is to take some of these tandem things and feed it into the sri system , phd e: but i the main point is the data because i am not . our back - end is fairly simple but until now , the attempts to improve it or have fail , what chuck tried to do professor a: , but he 's doing it with the same data , right ? so to so there 's two things being affected . professor a: . one is that , there 's something simple that 's wrong with the back - end . we ' ve been playing a number of states i if he got to the point of playing with the number of gaussians yet but , but , so far he had n't gotten any big improvement , but that 's all with the same amount of data which is pretty small . and . professor a: , you could do that , but i ' m saying even with it not with that part not retrained , just using having the h m ms much better h m professor a: . but just train those h m ms using different features , the features coming from our aurora . phd e: but what would be interesting to see also is what perhaps it 's not related , the amount of data but the recording conditions . i . because it 's probably not a problem of noise , because our features are supposed to be robust to noise . it 's not a problem of channel , because there is normalization with respect to the channel . so professor a: i ' m . what what is the problem that you 're trying to explain ? phd e: the the fact that the result with the tandem and aurora system are so much worse . professor a: that the so much worse ? i but i ' m almost certain that it , that it has to do with the amount of training data . professor a: but but having a huge if if you look at what commercial places do , they use a huge amount of data . this is a modest amount of data . professor a: so . , ordinarily you would say " , given that you have enough occurrences of the digits , you can just train with digits rather than with , " but , if you have a huge in other words , do word models but if you have a huge amount of data then you 're going to have many occurrences of similar allophones . and that 's just a huge amount of training for it . 
so it 's it has to be that , because , as you say , this is near - microphone , it 's really pretty clean data . now , some of it could be the fact that let 's see , in the in these multi - train things did we include noisy data in the training ? , that could be hurting us actually , for the clean case . phd e: , actually we see that the clean train for the aurora proposals are better than the multi - train , professor a: it is if cuz this is clean data , and so that 's not too surprising . but . phd e: , o i what i meant is that , let 's say if we add enough data to train on the meeting recorder digits , i we could have better results than this . phd e: what i meant is that perhaps we can learn something from this , what 's wrong what is different between ti - digits and these digits and professor a: so in the actual ti - digits database we 're getting point eight percent , and here we 're getting three or four three , let 's see , three for this ? , but , point eight percent is something like double or triple what people have gotten who ' ve worked very hard at doing that . and and also , as you point out , there 's adaptation in these numbers also . so if you , put the ad adap take the adaptation off , then it for the english - near you get something like two percent . and here you had , something like three point four . and i could easily see that difference coming from this huge amount of data that it was trained on . so it 's , i do n't think there 's anything magical here . it 's , we used a simple htk system with a modest amount of data . and this is a , modern system has a lot of points to it . so . , the htk is an older htk , even . it 's not that surprising . but to me it just meant a practical point that if we want to publish results on digits that people pay attention to we probably should cuz we ' ve had the problem before that you get show some improvement on something that 's , it seems like too large a number , and people do n't necessarily take it so . so the three point four percent for this is so why is it it 's an interesting question though , still . why is why is it three point four percent for the d the digits recorded in this environment as opposed to the point eight percent for the original ti - digits database ? professor a: just looking at the ti - di the tandem system , if we 're getting point eight percent , which , yes , it 's high . it 's , it 's not awfully high , but it 's , it 's high . why is it four times as high , or more ? professor a: right ? , there 's even though it 's close - miked there 's still there really is background noise . and i suspect when the ti - digits were recorded if somebody fumbled or said something wrong that they probably made them take it over . it was not there was no attempt to have it be realistic in any sense . phd e: and acoustically , it 's q it 's i listened . it 's quite different . ti - digit is it 's very , very clean and it 's like studio recording whereas these meeting recorder digits sometimes you have breath noise professor a: bless you . i . it 's so . yes . it 's it 's the indication it 's harder . , i that 's true either way . so take a look at the , the sri results . , they 're much better , but still you 're getting something like one point three percent for things that are same data as in t ti - digits the same text . and , i ' m the same system would get , point three or point four on the actual ti - digits . so this , on both systems the these digits are showing up as harder . 
which i find interesting this is closer to it 's still read . but i still think it 's much closer to what people actually face , when they 're dealing with people saying digits over the telephone . i do n't think , i ' m they would n't release the numbers , but i do n't think that the companies that do telephone speech get anything like point four percent on their digits . i ' m they get , for one thing people do phone up who do n't have middle america accents and it 's a we it 's us . it has many people who sound in many different ways . that was that topic . what else we got ? did we end up giving up on , any eurospeech submissions , or ? i know thilo and dan ellis are submitting something , but . phd e: i e the only thing with these the meeting recorder and , so , we gave up . professor a: . now , actually for the aur - we do have for aurora , right ? because because we have ano an extra month . professor a: , that 's fine . so th so we have a couple little things on meeting recorder and we have we do n't we do n't have to flood it with papers . we 're not trying to prove anything to anybody . so . that 's fine . anything else ? phd e: . so . perhaps that we ' ve been working on is , we have put the good vad in the system and it really makes a huge difference . so , , this is perhaps one of the reason why our system was not the best , because with the new vad , it 's very the results are similar to the france telecom results and perhaps even better sometimes . so there is this point . the problem is that it 's very big and we still have to think how to where to put it and , because it , this vad either some delay and we if we put it on the server side , it does n't work , because on the server side features you already have lda applied from the f from the terminal side and so you accumulate the delay so the vad should be before the lda which means perhaps on the terminal side and then smaller and phd e: it 's from ogi . so it 's the network trained it 's the network with the huge amounts on hidden of hidden units , and nine input frames compared to the vad that was in the proposal which has a very small amount of hidden units and fewer inputs . professor a: this is the one they had originally ? , but they had to get rid of it because of the space , did n't they ? phd e: but the abso assumption is that we will be able to make a vad that 's small and that works fine . and . so we can professor a: but the other thing is to use a different vad entirely . , i if there 's a if i what the thinking was amongst the etsi folk but if everybody let 's use this vad and take that out of there phd e: they just want , they do n't want to fix the vad because they think there is some interaction between feature extraction and vad or frame dropping but they still want to just to give some requirement for this vad because it 's it will not be part of they do n't want it to be part of the standard . so it must be at least somewhat fixed but not completely . so there just will be some requirements that are still not yet ready . professor a: determined . but i was thinking that s " , there may be some interaction , but i do n't think we need to be stuck on using our or ogi 's vad . we could use somebody else 's if it 's smaller or , as long as it did the job . so that 's good . phd e: . so there is this thing . 
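The VADs being compared here are MLP-based, and nothing of the real system is reproduced below. Purely to illustrate the small end of the size/accuracy trade-off under debate, this is roughly the smallest VAD one can write: a frame-energy gate with a hangover. The threshold and hangover length are arbitrary.

```python
import numpy as np

def energy_vad(frames, threshold_db=-40.0, hangover=5):
    """Toy frame-energy VAD: mark a frame as speech when its log energy
    exceeds a fixed threshold, then hold the decision for `hangover`
    frames so low-energy word endings are not clipped. The MLP VADs
    discussed in the meeting are far more robust than this."""
    log_e = 10.0 * np.log10(np.maximum(np.mean(frames ** 2, axis=1), 1e-12))
    speech = log_e > threshold_db
    out = np.zeros(len(speech), dtype=bool)
    hold = 0
    for i, s in enumerate(speech):
        hold = hangover if s else max(hold - 1, 0)   # hangover countdown
        out[i] = bool(s) or hold > 0
    return out
```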
there is i designed a new filter because when i designed other filters with shorter delay from the lda filters , there was one filter with fif sixty millisecond delay and the other with ten milliseconds and hynek suggested that both could have sixty - five sixty - s it 's sixty - five . both should have sixty - five because phd e: and . so i did that and it 's running . so , let 's see what will happen . but the filter is closer to the reference filter . . professor a: so that means logically , in principle , it should be better . so probably it 'll be worse . or in the basic perverse nature of reality . phd e: , and then we ' ve started to work with this of voiced - unvoiced . and next week we will perhaps try to have a new system with msg stream also see what happens . so , something that 's similar to the proposal too , but with msg stream . phd d: no , i w i begin to play with matlab and to found some parameter robust for voiced - unvoiced decision . but only to play . and we they we found that maybe w is a classical parameter , the sq the variance between the fft of the signal and the small spectrum of time we after the mel filter bank . and , is more or less robust . is good for clean speech . is quite good for noisy speech . but we must to have bigger statistic with timit , and is not ready yet to use on , i . phd d: i have here . i have here for one signal , for one frame . the the mix of the two , noise and unnoise , and the signal is this . clean , and this noise . these are the two the mixed , the big signal is for clean . professor a: , i ' m s there 's none of these axes are labeled , so i what this what 's this axis ? phd d: , this is energy , log - energy of the spectrum . of the this is the variance , the difference { nonvocalsound } between the spectrum of the signal and fft of each frame of the signal and this mouth spectrum of time after the f may fit for the two , phd d: this big , to here , they are to signal . this is for clean and this is for noise . phd d: and this is the noise portion . and this is more or less like this . but i meant to have see @ two the picture . this is , for one frame . the spectrum of the signal . and this is the small version of the spectrum after ml mel filter bank . phd d: and this is this is not the different . this is trying to obtain with lpc model the spectrum but using matlab without going factor and s phd d: and the that this is good . this is quite similar . this is another frame . ho how i obtained the envelope , { nonvocalsound } this envelope , with the mel filter bank . professor a: so now i wonder , do you want to i know you want to get at something orthogonal from what you get with the smooth spectrum . but if you were to really try and get a voiced - unvoiced , do you want to ignore that ? , do you , clearly a very big cues for voiced - unvoiced come from spectral slope and so on , phd d: because when did noise clear { nonvocalsound } in these section is clear if s @ { nonvocalsound } val value is indicative that is a voice frame and it 's low values professor a: , you probably want , certainly if you want to do good voiced - unvoiced detection , you need a few features . each each feature is by itself not enough . but , people look at slope and first auto - correlation coefficient , divided by power . or or there 's i we prob probably do n't have enough computation to do a simple pitch detector ? 
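The parameter being described sounds like the variance of the difference between the raw log spectrum and its mel-smoothed envelope: in a voiced frame the harmonics poke above the envelope, so the variance is high. A rough numpy sketch of that measure follows, with a plain triangular filterbank and guessed constants; this is an interpretation of what was said, not her matlab code.

```python
import numpy as np

def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def imel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Plain triangular mel filterbank, shape (n_filters, n_fft//2 + 1)."""
    edges = imel(np.linspace(0.0, mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        if mid > lo:
            fb[i, lo:mid] = np.linspace(0.0, 1.0, mid - lo, endpoint=False)
        if hi > mid:
            fb[i, mid:hi] = np.linspace(1.0, 0.0, hi - mid, endpoint=False)
    return fb

def voicedness(frame, sr, n_filters=23):
    """Variance of (log spectrum - mel-smoothed envelope): high when
    harmonics stand out above the envelope (voiced), lower for unvoiced
    or noise-dominated frames."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hanning(n_fft))) ** 2
    fb = mel_filterbank(n_filters, n_fft, sr)
    weights = fb.sum(axis=0)
    # project the mel energies back onto the FFT bins as a smooth envelope
    envelope = (fb.T @ (fb @ spec)) / np.maximum(weights, 1e-12)
    mask = weights > 0
    diff = np.log(spec[mask] + 1e-12) - np.log(envelope[mask] + 1e-12)
    return float(np.var(diff))
```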
with a pitch detector you could have a an estimate of what the or maybe you could you just do it going through the p fft 's figuring out some probable harmonic structure . and and . phd d: you have read up and you have a paper , the paper that you s give me yesterday . they say that yesterday they are some { nonvocalsound } problem phd e: , there is th this fact actually . if you look at this spectrum , what 's this again ? is it the mel - filters ? phd e: ok . so the envelope here is the output of the mel - filters and what we clearly see is that in some cases , and it clearly appears here , and the harmonics are resolved by the f , there are still appear after mel - filtering , and it happens for high pitched voice because the width of the lower frequency mel - filters is sometimes even smaller than the pitch . it 's around one hundred , one hundred and fifty hertz nnn . and so what happens is that this , add additional variability to this envelope so we were thinking to modify the mel - spectrum to have something that 's smoother on low frequencies . professor a: separate thing ? maybe so . so , what what i was talking about was just , starting with the fft you could do a very rough thing to estimate pitch . and , given that , you could come up with some estimate of how much of the low frequency energy was explained by those harmonics . it 's a variant on what you 're s what you 're doing . the , the mel does give a smooth thing . but as you say it 's not that smooth here . and and so if you just subtracted off your of the harmonics then something like this would end up with quite a bit lower energy in the first fifteen hundred hertz or so and our first kilohertz , even . and if was noisy , the proportion that it would go down would be if it was unvoiced . so you oughta be able to pick out voiced segments . at least it should be another cue . so . anyway . ok ? that 's what 's going on . what 's up with you ? grad b: our t i went to talk with mike jordan this week { nonvocalsound } and shared with him the ideas about extending the larry saul work and i asked him some questions about factorial h m so like later down the line when we ' ve come up with these feature detectors , how do we , model the time series that happens and we talked a little bit about factorial h m ms and how when you 're doing inference or w when you 're doing recognition , there 's like simple viterbi that you can do for these h m and the great advantages that a lot of times the factorial h m ms do n't over - alert the problem there they have a limited number of parameters and they focus directly on the sub - problems at hand so you can imagine five or so parallel features transitioning independently and then at the end you couple these factorial h m ms with undirected links based on some more data . so he seemed like really interested in this and said this is something very do - able and can learn a lot and , i ' ve just been continue reading about certain things . thinking of maybe using m modulation spectrum to as features also in the sub - bands because it seems like the modulation spectrum tells you a lot about the intelligibility of certain words and so , . just that 's about it . grad c: ok . 
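The earlier remark that the lowest mel filters can be narrower than the harmonic spacing of a high-pitched voice is easy to check numerically. Assuming 23 filters over 0-4 kHz (a guess at the configuration, not a confirmed setting):

```python
import numpy as np

def mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def imel(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

edges = imel(np.linspace(0.0, mel(4000.0), 23 + 2))   # 23 filters, 0-4 kHz
widths = edges[2:] - edges[:-2]        # full base width of each triangle
print("lowest filter widths (Hz):", np.round(widths[:4], 1))
# -> roughly 120-150 Hz at the bottom of the band, i.e. comparable to or
#    narrower than the harmonic spacing of a high-pitched voice, so
#    individual harmonics survive the mel smoothing, as seen in the plots.
```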
and so i ' ve been looking at avendano 's work and i 'll try to write up in my next stat status report a description of what he 's doing , but it 's an approach to deal with reverberation or that the aspect of his work that i ' m interested in the idea is that normally an analysis frames are too short to encompass reverberation effects in full . you miss most of the reverberation tail in a ten millisecond window and so you 'd like it to be that the reverberation responses simply convolved in , but it 's not really with these ten millisecond frames cuz you j but if you take , say , a two millisecond window i ' m a two second window then in a room like this , most of the reverberation response is included in the window and the then it then things are l more linear . it is it is more like the reverberation response is simply c convolved and you can use channel normalization techniques like in his thesis he 's assuming that the reverberation response is fixed . he just does mean subtraction , which is like removing the dc component of the modulation spectrum and that 's supposed to d deal pretty with the reverberation and the neat thing is you ca n't take these two second frames and feed them to a speech recognizer so he does this method training trading the spectral resolution for time resolution and come ca synthesizes a new representation which is with say ten second frames but a lower s frequency resolution . so i do n't really know the theory . i it 's these are called " time frequency representations " and h he 's making the time sh finer grained and the frequency resolution less fine grained . s so i ' m i my first stab actually in continuing his work is to re - implement this thing which changes the time and frequency resolutions cuz he does n't have code for me . so that 'll take some reading about the theory . i do n't really know the theory . , and , another f first step is , so the way i want to extend his work is make it able to deal with a time varying reverberation response we do n't really know how fast the reverberation response is varying the meeting recorder data so we have this block least squares imp echo canceller implementation and i want to try finding the response , say , between a near mike and the table mike for someone using the echo canceller and looking at the echo canceller taps and then see how fast that varies from block to block . that should give an idea of how fast the reverberation response is changing . grad c: s so y you do you read some of the zeros as o 's and some as zeros . is there a particular way we 're supposed to read them ? professor a: no . " o " o " o " and " zero " are two ways that we say that digit . phd e: perhaps in the sheets there should be another sign for the if we want to the guy to say " o " or professor a: no . people will do what they say . it 's ok . in digit recognition we ' ve done before , you have two pronunciations for that value , " o " and " zero " . phd e: but it 's perhaps more difficult for the people to prepare the database then , if because here you only have zeros professor a: they write down oh . or they write down zero a and they each have their own pronunciation . phd e: but if the sh the sheet was prepared with a different sign for the " o " . professor a: but people would n't that wa there is no convention for it . see . , you 'd have to tell them " ok when we write this , say it tha " , and you just they just want people to read the digits as you ordinarily would and people say it different ways . 
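Of the approach described here, the easiest piece to sketch is the long-window log-spectral mean subtraction: with analysis frames long enough to contain most of the room response, a fixed reverberation becomes approximately an additive constant per frequency in the log domain, and subtracting the per-frequency mean over time removes it. The resolution-trading resynthesis step is omitted, and the window and hop sizes below are guesses.

```python
import numpy as np

def long_window_logspec(x, sr, win_s=2.0, hop_s=0.25):
    """Log-magnitude STFT with very long frames, so that a room response
    shorter than the window is (approximately) just convolved in."""
    n, hop = int(win_s * sr), int(hop_s * sr)
    win = np.hanning(n)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n, hop)]
    return np.log(np.abs(np.fft.rfft(np.array(frames), axis=1)) + 1e-12)

def mean_subtract(logspec):
    """Subtract the per-frequency mean over time -- removing the 'DC of the
    modulation spectrum', and with it a fixed reverberation response."""
    return logspec - logspec.mean(axis=0, keepdims=True)
```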
grad c: is this a change from the last batch of forms ? because in the last batch it was spelled out which one you should read . professor a: yes . that 's right . it was it was spelled out , and they decided they wanted to get at more the way people would really say things . professor a: that 's also why they 're bunched together in these different groups . so so it 's so it 's everything 's fine . actually , let me just s since you brought it up , i was just it was hard not to be self - conscious about that when it after we since we just discussed it . but i realized that when i ' m talking on the phone , certainly , and saying these numbers , i almost always say zero . and cuz because i it 's two syllables . it 's it 's more likely they 'll understand what i said . so that 's the habit i ' m in , but some people say " o " and
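The two-pronunciation handling of the digit described here could look like this in a recognizer lexicon, rendered as a Python structure. The phone strings are illustrative ARPAbet-style guesses, not taken from any actual dictionary.

```python
# The digit "0" simply gets two pronunciations in the lexicon, and readers
# say whichever comes naturally; the recognizer accepts either.
digit_lexicon = {
    "0": [("zero", "z ih r ow"), ("oh", "ow")],
    "1": [("one", "w ah n")],
    "2": [("two", "t uw")],
}
```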
the main purpose of the meeting of icsi's meeting recorder group at berkeley was to discuss the recent progress of its members. this includes reports on the progress of the group's main digit recogniser project , with interest in voice-activity detectors and voiced/unvoiced detection , work on acoustic feature detection , and research into dealing with reverberation. there was also talk of comparing different recognition systems and training datasets , and a discussion of the pronunciation of the digit zero for the recording at the end of the meeting. in his next status report , speaker me026 will summarise the work he has been researching. the digit recognition system is still not working well enough ; they must get better results if they want to publish and be noticed. they have not really made many improvements , which may be due to their comparatively small training set , or the conditions the data is recorded under. the new vad is quite a large network , and adds a delay to the process. this caused ogi to drop it , though speaker mn007 is assuming that a smaller and equally effective system can be developed. the alternative is to get another vad from somewhere else , though it's not clear if one will even be required in the final system. there are some problems with the voiced/unvoiced feature detection , because some pitches are slipping through the filtering. the group have been comparing their recognition system to a few others , and theirs has not come off favourably. there could be many reasons for this , including a smaller training set , more realistic data , or older technology. speaker mn007 has put the best voice activity detector into the system , with great improvements , along with designing new filters that run at the correct latency. speaker fn002 has started to find parameters for voiced/unvoiced feature detection , and has found some classic ones , although there are other things she wishes to look at. speaker me013 offers a few ideas of simple things she may want to try , as he is not confident in everything she is trying. speaker me006 is continuing with the idea of extending work on acoustic feature detection. he is continuing to read , and has discussed the suitability of factorial hmms with a colleague. speaker me026 has been learning more about previous work on reverberation , and is ready to start with a re-implementation of the theory. from there he wants to extend the work to look at time-varying reverb.
###dialogue: professor a: we 're going ? ok . sh - close your door on the way out ? professor a: probably wanna get this other door , too . so . what are we talking about today ? professor a: the both the sri system and the oth and for one thing that shows the difference between having a lot of training data or not , professor a: , the the best number we have on the english on near microphone only is three or four percent . professor a: and it 's significantly better than that , using fairly simple front - ends on the , with the sri system . so i th that the but that 's using a pretty huge amount of data , mostly not digits , but then again , . , mostly not digits for the actual training the h m ms whereas in this case we 're just using digits for training the h m professor a: did anybody mention about whether the sri system is a is doing the digits the wor as a word model or as a sub s sub - phone states ? phd e: . so , because it 's their very d huge , their huge system . and . but . so . there is one difference , the sri system the result for the sri system that are represented here are with adaptation . so there is it 's their complete system and including on - line unsupervised adaptation . phd e: and if you do n't use adaptation , the error rate is around fifty percent worse , if i remember . professor a: still . but but what i 'd be interested to do given that , is that we should take i that somebody 's gon na do this , right ? is to take some of these tandem things and feed it into the sri system , phd e: but i the main point is the data because i am not . our back - end is fairly simple but until now , the attempts to improve it or have fail , what chuck tried to do professor a: , but he 's doing it with the same data , right ? so to so there 's two things being affected . professor a: . one is that , there 's something simple that 's wrong with the back - end . we ' ve been playing a number of states i if he got to the point of playing with the number of gaussians yet but , but , so far he had n't gotten any big improvement , but that 's all with the same amount of data which is pretty small . and . professor a: , you could do that , but i ' m saying even with it not with that part not retrained , just using having the h m ms much better h m professor a: . but just train those h m ms using different features , the features coming from our aurora . phd e: but what would be interesting to see also is what perhaps it 's not related , the amount of data but the recording conditions . i . because it 's probably not a problem of noise , because our features are supposed to be robust to noise . it 's not a problem of channel , because there is normalization with respect to the channel . so professor a: i ' m . what what is the problem that you 're trying to explain ? phd e: the the fact that the result with the tandem and aurora system are so much worse . professor a: that the so much worse ? i but i ' m almost certain that it , that it has to do with the amount of training data . professor a: but but having a huge if if you look at what commercial places do , they use a huge amount of data . this is a modest amount of data . professor a: so . , ordinarily you would say " , given that you have enough occurrences of the digits , you can just train with digits rather than with , " but , if you have a huge in other words , do word models but if you have a huge amount of data then you 're going to have many occurrences of similar allophones . 
and that 's just a huge amount of training for it . so it 's it has to be that , because , as you say , this is near - microphone , it 's really pretty clean data . now , some of it could be the fact that let 's see , in the in these multi - train things did we include noisy data in the training ? , that could be hurting us actually , for the clean case . phd e: , actually we see that the clean train for the aurora proposals are better than the multi - train , professor a: it is if cuz this is clean data , and so that 's not too surprising . but . phd e: , o i what i meant is that , let 's say if we add enough data to train on the meeting recorder digits , i we could have better results than this . phd e: what i meant is that perhaps we can learn something from this , what 's wrong what is different between ti - digits and these digits and professor a: so in the actual ti - digits database we 're getting point eight percent , and here we 're getting three or four three , let 's see , three for this ? , but , point eight percent is something like double or triple what people have gotten who ' ve worked very hard at doing that . and and also , as you point out , there 's adaptation in these numbers also . so if you , put the ad adap take the adaptation off , then it for the english - near you get something like two percent . and here you had , something like three point four . and i could easily see that difference coming from this huge amount of data that it was trained on . so it 's , i do n't think there 's anything magical here . it 's , we used a simple htk system with a modest amount of data . and this is a , modern system has a lot of points to it . so . , the htk is an older htk , even . it 's not that surprising . but to me it just meant a practical point that if we want to publish results on digits that people pay attention to we probably should cuz we ' ve had the problem before that you get show some improvement on something that 's , it seems like too large a number , and people do n't necessarily take it so . so the three point four percent for this is so why is it it 's an interesting question though , still . why is why is it three point four percent for the d the digits recorded in this environment as opposed to the point eight percent for the original ti - digits database ? professor a: just looking at the ti - di the tandem system , if we 're getting point eight percent , which , yes , it 's high . it 's , it 's not awfully high , but it 's , it 's high . why is it four times as high , or more ? professor a: right ? , there 's even though it 's close - miked there 's still there really is background noise . and i suspect when the ti - digits were recorded if somebody fumbled or said something wrong that they probably made them take it over . it was not there was no attempt to have it be realistic in any sense . phd e: and acoustically , it 's q it 's i listened . it 's quite different . ti - digit is it 's very , very clean and it 's like studio recording whereas these meeting recorder digits sometimes you have breath noise professor a: bless you . i . it 's so . yes . it 's it 's the indication it 's harder . , i that 's true either way . so take a look at the , the sri results . , they 're much better , but still you 're getting something like one point three percent for things that are same data as in t ti - digits the same text . and , i ' m the same system would get , point three or point four on the actual ti - digits . 
so this , on both systems the these digits are showing up as harder . which i find interesting this is closer to it 's still read . but i still think it 's much closer to what people actually face , when they 're dealing with people saying digits over the telephone . i do n't think , i ' m they would n't release the numbers , but i do n't think that the companies that do telephone speech get anything like point four percent on their digits . i ' m they get , for one thing people do phone up who do n't have middle america accents and it 's a we it 's us . it has many people who sound in many different ways . that was that topic . what else we got ? did we end up giving up on , any eurospeech submissions , or ? i know thilo and dan ellis are submitting something , but . phd e: i e the only thing with these the meeting recorder and , so , we gave up . professor a: . now , actually for the aur - we do have for aurora , right ? because because we have ano an extra month . professor a: , that 's fine . so th so we have a couple little things on meeting recorder and we have we do n't we do n't have to flood it with papers . we 're not trying to prove anything to anybody . so . that 's fine . anything else ? phd e: . so . perhaps that we ' ve been working on is , we have put the good vad in the system and it really makes a huge difference . so , , this is perhaps one of the reason why our system was not the best , because with the new vad , it 's very the results are similar to the france telecom results and perhaps even better sometimes . so there is this point . the problem is that it 's very big and we still have to think how to where to put it and , because it , this vad either some delay and we if we put it on the server side , it does n't work , because on the server side features you already have lda applied from the f from the terminal side and so you accumulate the delay so the vad should be before the lda which means perhaps on the terminal side and then smaller and phd e: it 's from ogi . so it 's the network trained it 's the network with the huge amounts on hidden of hidden units , and nine input frames compared to the vad that was in the proposal which has a very small amount of hidden units and fewer inputs . professor a: this is the one they had originally ? , but they had to get rid of it because of the space , did n't they ? phd e: but the abso assumption is that we will be able to make a vad that 's small and that works fine . and . so we can professor a: but the other thing is to use a different vad entirely . , i if there 's a if i what the thinking was amongst the etsi folk but if everybody let 's use this vad and take that out of there phd e: they just want , they do n't want to fix the vad because they think there is some interaction between feature extraction and vad or frame dropping but they still want to just to give some requirement for this vad because it 's it will not be part of they do n't want it to be part of the standard . so it must be at least somewhat fixed but not completely . so there just will be some requirements that are still not yet ready . professor a: determined . but i was thinking that s " , there may be some interaction , but i do n't think we need to be stuck on using our or ogi 's vad . we could use somebody else 's if it 's smaller or , as long as it did the job . so that 's good . phd e: . so there is this thing . 
there is i designed a new filter because when i designed other filters with shorter delay from the lda filters , there was one filter with fif sixty millisecond delay and the other with ten milliseconds and hynek suggested that both could have sixty - five sixty - s it 's sixty - five . both should have sixty - five because phd e: and . so i did that and it 's running . so , let 's see what will happen . but the filter is closer to the reference filter . . professor a: so that means logically , in principle , it should be better . so probably it 'll be worse . or in the basic perverse nature of reality . phd e: , and then we ' ve started to work with this of voiced - unvoiced . and next week we will perhaps try to have a new system with msg stream also see what happens . so , something that 's similar to the proposal too , but with msg stream . phd d: no , i w i begin to play with matlab and to found some parameter robust for voiced - unvoiced decision . but only to play . and we they we found that maybe w is a classical parameter , the sq the variance between the fft of the signal and the small spectrum of time we after the mel filter bank . and , is more or less robust . is good for clean speech . is quite good for noisy speech . but we must to have bigger statistic with timit , and is not ready yet to use on , i . phd d: i have here . i have here for one signal , for one frame . the the mix of the two , noise and unnoise , and the signal is this . clean , and this noise . these are the two the mixed , the big signal is for clean . professor a: , i ' m s there 's none of these axes are labeled , so i what this what 's this axis ? phd d: , this is energy , log - energy of the spectrum . of the this is the variance , the difference { nonvocalsound } between the spectrum of the signal and fft of each frame of the signal and this mouth spectrum of time after the f may fit for the two , phd d: this big , to here , they are to signal . this is for clean and this is for noise . phd d: and this is the noise portion . and this is more or less like this . but i meant to have see @ two the picture . this is , for one frame . the spectrum of the signal . and this is the small version of the spectrum after ml mel filter bank . phd d: and this is this is not the different . this is trying to obtain with lpc model the spectrum but using matlab without going factor and s phd d: and the that this is good . this is quite similar . this is another frame . ho how i obtained the envelope , { nonvocalsound } this envelope , with the mel filter bank . professor a: so now i wonder , do you want to i know you want to get at something orthogonal from what you get with the smooth spectrum . but if you were to really try and get a voiced - unvoiced , do you want to ignore that ? , do you , clearly a very big cues for voiced - unvoiced come from spectral slope and so on , phd d: because when did noise clear { nonvocalsound } in these section is clear if s @ { nonvocalsound } val value is indicative that is a voice frame and it 's low values professor a: , you probably want , certainly if you want to do good voiced - unvoiced detection , you need a few features . each each feature is by itself not enough . but , people look at slope and first auto - correlation coefficient , divided by power . or or there 's i we prob probably do n't have enough computation to do a simple pitch detector ? 
professor a: with a pitch detector you could have an estimate of the harmonics. or maybe you could just do it by going through the ffts, figuring out some probable harmonic structure.
phd d: in the paper that you gave me yesterday, they say there are some problems...
phd e: there is this fact, actually. if you look at this spectrum... what's this again, the mel filters?
phd e: ok. so the envelope here is the output of the mel filters, and what we clearly see, and it clearly appears here, is that in some cases the harmonics are resolved by the fft and still appear after mel filtering. it happens for high-pitched voices, because the width of the lower-frequency mel filters is sometimes even smaller than the pitch; it's around one hundred, one hundred and fifty hertz. so what happens is that this adds additional variability to the envelope, and we were thinking of modifying the mel spectrum to have something that's smoother at low frequencies.
professor a: a separate thing? maybe so. what i was talking about was just starting with the fft: you could do a very rough thing to estimate pitch, and given that, you could come up with some estimate of how much of the low-frequency energy is explained by those harmonics. it's a variant on what you're doing. the mel does give a smooth thing, but as you say it's not that smooth here. if you just subtracted off the harmonics, then a frame like this would end up with quite a bit lower energy in the first kilohertz or so; and if it was noisy or unvoiced, the proportion by which it would go down would be smaller. so you ought to be able to pick out voiced segments. at least it should be another cue. anyway, that's what's going on. what's up with you?
grad b: i went to talk with mike jordan this week and shared with him the ideas about extending the larry saul work, and i asked him some questions about factorial hmms: later down the line, when we've come up with these feature detectors, how do we model the time series? we talked a little bit about factorial hmms, and how, when you're doing inference, when you're doing recognition, there's a simple viterbi that you can do for them. the great advantage is that a lot of the time factorial hmms don't over-parameterize the problem: they have a limited number of parameters and they focus directly on the sub-problems at hand. so you can imagine five or so parallel feature streams transitioning independently, and then at the end you couple these factorial hmms with undirected links based on some more data. he seemed really interested in this and said it's something very doable. and i've been continuing to read about certain things, thinking of maybe also using the modulation spectrum as features in the sub-bands, because the modulation spectrum seems to tell you a lot about the intelligibility of certain words. that's about it.
grad c: ok.
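the two cheap cues mentioned above, the first autocorrelation coefficient divided by power and a rough pitch estimate, can both be read off the frame autocorrelation. a minimal sketch; the f0 search range and frame assumptions are illustrative:

```python
import numpy as np

def voicing_cues(frame, sr=8000, f0_min=60, f0_max=400):
    """cheap voicing cues for one frame:
    - r1: first autocorrelation coefficient divided by frame power
    - f0: crude pitch estimate from the autocorrelation peak in the
      plausible lag range
    - periodicity: that peak normalized by power, near 1 for strongly
      periodic (voiced) frames
    """
    frame = frame - frame.mean()
    power = np.dot(frame, frame) + 1e-10
    r1 = np.dot(frame[:-1], frame[1:]) / power
    lags = np.arange(int(sr / f0_max), int(sr / f0_min))
    ac = np.array([np.dot(frame[:-lag], frame[lag:]) for lag in lags])
    best = int(np.argmax(ac))
    return r1, sr / lags[best], ac[best] / power
```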
grad c: and so i've been looking at avendano's work, and i'll try to write up a description of what he's doing in my next status report. it's an approach to dealing with reverberation; at least, that's the aspect of his work i'm interested in. the idea is that normal analysis frames are too short to encompass reverberation effects in full: you miss most of the reverberation tail in a ten-millisecond window. you'd like it to be the case that the reverberation response is simply convolved in, but with these ten-millisecond frames it really isn't. if you take, say, a two-second window, then in a room like this most of the reverberation response is included in the window, and things are more linear: it is more like the reverberation response is simply convolved in, and you can use channel normalization techniques. in his thesis he assumes the reverberation response is fixed; he just does mean subtraction, which is like removing the dc component of the modulation spectrum, and that's supposed to deal pretty well with the reverberation. the catch is that you can't take these two-second frames and feed them to a speech recognizer, so he trades spectral resolution for time resolution and synthesizes a new representation with, say, ten-millisecond frames but a lower frequency resolution. i don't really know the theory yet; these are called "time-frequency representations", and he's making the time resolution finer grained and the frequency resolution less fine grained. so my first step in continuing his work is to re-implement this thing that changes the time and frequency resolutions, since he doesn't have code for me; that'll take some reading about the theory. and another first step: the way i want to extend his work is to make it deal with a time-varying reverberation response. we don't really know how fast the reverberation response varies in the meeting recorder data, so, since we have this block least-squares echo canceller implementation, i want to try finding the response, say, between a near mike and the table mike for someone, using the echo canceller, then look at the echo canceller taps and see how fast they vary from block to block. that should give an idea of how fast the reverberation response is changing.
grad c: so, you read some of the zeros as "oh" and some as "zero". is there a particular way we're supposed to read them?
professor a: no. "oh" and "zero" are two ways that we say that digit.
phd e: perhaps on the sheets there should be another sign if we want the person to say "oh", or...
professor a: no, people will say what they say. it's ok. in digit recognition we've done before, you have two pronunciations for that value, "oh" and "zero".
phd e: but it's perhaps more difficult then for the people who prepare the database, because here you only have zeros.
professor a: they write down "oh", or they write down "zero", and each has its own pronunciation.
phd e: but if the sheet was prepared with a different sign for the "oh"...
professor a: but people wouldn't; there is no convention for it. you'd have to tell them, "ok, when we write this, say it that way", and they just want people to read the digits as you ordinarily would, and people say it different ways.
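a minimal sketch of the long-window mean-subtraction idea described above. it computes log spectra over two-second windows and removes the per-utterance mean (analogous to removing the dc of the modulation spectrum), but it deliberately skips the resynthesis back to short recognizer frames that the full method includes; window and hop sizes are assumptions:

```python
import numpy as np

def long_window_mean_subtraction(signal, sr=16000, win_s=2.0, hop_s=1.0):
    """log-spectral mean subtraction over long (~2 s) analysis windows.

    with windows long enough to cover most of the reverberation tail,
    a fixed room response becomes roughly additive in the log spectrum,
    so subtracting the per-utterance mean removes much of its effect.
    """
    win, hop = int(win_s * sr), int(hop_s * sr)
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    log_specs = np.array([np.log(np.abs(np.fft.rfft(f)) + 1e-10)
                          for f in frames])
    return log_specs - log_specs.mean(axis=0)  # one mean per frequency bin
```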
grad c: is this a change from the last batch of forms? because in the last batch it was spelled out which one you should read.
professor a: yes, that's right. it was spelled out, and they decided they wanted to get at more the way people would really say things. that's also why they're bunched together in these different groups. so everything's fine. actually, since you brought it up: it was hard not to be self-conscious about it right after we discussed it, but i realized that when i'm talking on the phone and saying these numbers, i almost always say "zero", because it's two syllables and it's more likely they'll understand what i said. so that's the habit i'm in, but some people say "oh".

###summary: the main purpose of the meeting of icsi's meeting recorder group at berkeley was to discuss the recent progress of its members. this includes reports on the progress of the group's main digit recogniser project, with interest in voice-activity detectors and voiced/unvoiced detection, work on acoustic feature detection, and research into dealing with reverberation. there was also talk of comparing different recognition systems and training datasets, and a discussion of the pronunciation of the digit zero for the recording at the end of the meeting. in his next status report, me026 will summarise the work he has been researching. the digit recognition system is still not working well enough; they must get better results if they want to publish and be noticed. they have not really made many improvements, which may be due to their comparatively small training set, or the conditions the data is recorded under. the new vad is quite a large network, and adds a delay to the process; this caused ogi to drop it, though speaker mn007 assumes that a smaller and equally effective system can be developed. the alternative is to get yet another vad from somewhere else, though it is not clear whether one will even be required in the final system. there are some problems with the voiced/unvoiced feature detection, because some pitches slip through the filtering. the group have been comparing their recognition system to a few others, and theirs has not come off favourably; there could be many reasons for this, including a smaller training set, more realistic data, or older technology. speaker mn007 has put the best voice-activity detector into the system, to great improvement, along with designing new filters that run at the correct latency. speaker fn002 has started to find parameters for voiced/unvoiced feature detection, and has found some classical ones, although there are other things she wishes to look at. me013 offers a few ideas of simple things she may want to try, as he is not confident about everything she is trying. speaker me006 is continuing with the idea of extending work on acoustic feature detection; he is continuing to read, and has discussed the suitability of factorial hmms with a colleague. speaker me026 has been learning more about previous work on reverberation, and is ready to start with a re-implementation of the theory. from there he wants to extend the work to look at time-varying reverb.
grad a: ok, we're on. so just make sure that your wireless mike is on, if you're wearing a wireless. you should be able to see which channel you're on by watching the little bars change. so, actually, if you guys want to go ahead and read digits now, as long as you've signed the consent form, that's all right.
grad a: each individually. we've talked about doing them all at the same time, but cognitively that would be really difficult, to try to read them while everyone else is.
grad a: so, when you're reading the digit strings, the first thing to do is just say which transcript you're on.
professor c: the other way; we may need versions of all this garbage.
grad a: so the first thing you want to do is just say which transcript you're on. you can see the transcript? there are two large number strings on the digits form, so you would just read that one, and then you read each line with a small pause between the lines. the pause is just so the person transcribing it can tell where one line ends and the other begins. i'll read the digit strings first, so you can see how that goes. again, i'm not sure how much i should talk about before everyone's here.
grad a: ok, why don't i go ahead and read digit strings and then we can go on from there.
grad a: also, a note on wearing the microphones. all of you look like you're doing it reasonably correctly, but you want it about two thumb-widths away from your mouth, and at the corner; that's so you minimize breath sounds, so that when you're breathing you don't breathe into the mike. that's good. and everyone needs to fill out, only once, the speaker form and the consent form. you should read the consent form, but the thing to notice is that we will give you an opportunity to edit all the transcripts. so if you say things that you don't want released to the general public (these will be available at some point to anyone who wants them), you'll be given an opportunity, by email, to bleep out any portions you don't like. on the speaker form, just fill out as much of the information as you can. if you're not exactly sure about the region, we're not exactly sure either, so don't worry too much about it; it's just self-rating. and that's about it. do you want me to talk about why we're doing this and what this project is?
professor c: whether she knows is another question. so, are the people going to be identified by name?
grad a: we'll anonymize it in the transcript, but not in the audio.
professor c: so then, in terms of people worrying about excising things from the transcript, it's unlikely, since it isn't attributed. but...
grad a: right. so if i said, "hi jerry, how are you?", we're not going to go through and cancel out the "jerry"s. in the speaker id tags there'll be m-one-oh-seven, m-one-oh-eight, and so on. but i don't know a good way of doing it on the audio and still having people who do discourse research be able to use the data.
grad a: but we find that we want the meeting to be as natural as possible. so we're trying to record real meetings, and we don't want to have to use aliases, and we don't want people to be editing what they say.
grad a: so it's better, just as a post-process, to edit out every time you bash microsoft.
grad a: so, this project is called meeting recorder, and there are lots of different aspects to it. my particular interest is in the pda of the future; this is a mock-up of one. yes, we do believe the pda of the future will be made of wood. the idea is that you'd be able to put a pda on the table at an impromptu meeting, record it, and then be able to do querying and retrieval on the meeting later on. so my particular interest is a portable device to do information retrieval on meetings. other people are interested in other aspects of meetings. the first step in any of these is to collect some data, and so what we wanted is a room that's instrumented with both the table-top microphones (these are very high-quality pressure-zone mikes) and the close-talking mikes. what the close-talking mikes give us is some ground truth: high-quality audio, especially for people who aren't interested in the acoustic parts of this corpus. for people who are more interested in language, we didn't want to penalize them by having only the far-field mikes available. also, this is a very, very hard task in terms of speech recognition, so on the far-field mikes we can expect very low recognition results, and we wanted the near-field mikes to at least isolate the difference between the two. that's why we're recording in parallel with the close-talking and the far-field mikes at the same time. and all these channels are recorded simultaneously and frame-synchronously, so you can also do things like beam-forming on all the microphones and research like that. our intention is to release this data to the public, probably through a body like the ldc, and just make it a generally available corpus. there's other work going on in meeting recording: we're working with sri and uw; nist has started an effort which will include video (we're not including video); and there's also a small amount of assistance from ibm. and the digit strings are just a more constrained task. because the general environment is so challenging, we decided to do at least one set of digit strings to give ourselves something easier. it's exactly the same digit strings as in ti-digits, which is a common connected-digits corpus, so some comparison will be able to be made. anything else?
grad a: ok, so when the last person comes in, just have them wear a wireless; it should be on already, either one of those. and have them read the digit strings and fill out the forms. the most important form is the consent form, so just be sure everyone signs that, if they consent.
grad b: it's pretty usual for meetings that people come late, so you will have to leave what you set up.
grad a: and just give me a call (my number's up there) when your meeting is over. i'm going to leave this mike here, but i'm not going to have it on, so don't have them use this one; it'll just be sitting here.
professor c: adam, we will be using the screen as well. ok! organization. so, you guys got email about this friday, about what we're up to.
professor c: this was about inferring intentions from features in context, and the words, like "go to see", or "visit", or some...
professor c: i guess these guys have got better filters, because i sent it to everybody. you just blew it off.
grad b: it's really simple, though. so this is the idea we could pursue, if we think it's worth it, and we will agree on that: to come up with a very, very first crude prototype, do some implementation work, some research, and some modeling. so the idea is: if you want to go somewhere, focus on that object down there. that's the powder tower. now, we found in our data and from experiments that there are three things you can do. you can walk this way and come really, really close to it, and touch it, but you cannot enter or do anything else; unless you're interested in rock climbing, it won't do you any good standing there, it's just a dark alley. but you can touch it. if you want to actually go up or into the tower, you have to go this way, and then through some buildings and up some stairs. and if you actually want to see the tower, and that's what most people want to do, just have a good look at it, take a picture for the family, you have to go this way and go up here, and there you have a really good view of it. it exploded during the thirty years' war; a really interesting sight. and these lines are paths...
grad b: ...that's the street network of our geographic information system. and you can tell that we deliberately cut out this part, because otherwise we couldn't get our gis system to lead people this way: it would always use the closest point to the object, and then the tourists would be facing a wall, which would do them no good. so, what we found interesting is, first of all, intentions differ. maybe you want to enter a building. maybe you want to see it, take a picture of it. or maybe you actually want to come as close as possible to the building, for whatever reason that may be.
grad b: maybe you would want to touch it. these intentions we could, if we want to, call the vista mode, where you just want to get the overview or look at it; the enter mode; and the tango mode. i always come up with silly names; "tango" means, literally translated, "to touch". but sometimes the tango mode is really relevant, in the sense that you don't have the intention of entering a building, but something is really close to it and you just want to approach it or get to that building. consider the post office in chicago, a building so large that it has its own zip code: the entrance could be miles away from the closest point. so sometimes it makes sense to distinguish there. i've looked through twenty-some transcripts (i didn't look through all the data), and there are a lot more different ways people phrase how to get to a certain place. sometimes it's a little bit more obvious. maybe i should go back a couple of steps and go through the...
professor c: no, ok, come in, sit down, if you grab yourself a microphone.
grad b: ok. that was the idea. when people want to go to a building, sometimes they just want to look at it. sometimes they want to enter it.
grad b: and sometimes they want to get really close to it. that's something we found; it's just a truism. and the places where you will lead them for these intentions are sometimes incredibly different. i gave an example where the point where you end up if you want to look at it is completely different from where you end up if you want to enter it. so, this is how people may phrase those requests to a mock-up system, at least that's the way they did it. and we get tons of these: "how do i get to", "i want to go to", but also "give me directions to" and "i would like to see". and what we can do, if we look closer at the data (that was the wrong one), is look at some factors that may make a difference. first of all, very important, and something i completely forgot when we talked, this is a crucial factor: what type of object is it? some buildings you just don't want to take pictures of, or very rarely, but you usually want to enter them; some objects are more picturesque and more highly photographed. then the actual phrases may give us some idea of what the person wants. looking at the data in a superficial way, i found some modifiers that may also give us a hint: "i'm trying to get to", "i need to get to"; hints to the fact that you're not really sightseeing, not just there for pleasure, and so on. and this leads us straight to the context, which should also be considered: whatever it is you're doing at the moment may also influence the interpretation of a phrase. so my suggestion is really simple. but let me say one more thing first. what we do know is that the parser we use in the smartkom system will never differentiate between any of these. all of these things will result in the same xml m-three-l structure: action "go", and then an object.
grad b: and a source. so it's way too crude to capture those differences in intentions. so maybe for a deep understanding task, that's a playground, a first little thing where we can start. ok: we are going to get those m-three-l structures, the crude, undifferentiated parse, the interpreted input. we may need additional part of speech, or maybe just some information on the verb, and modifiers, auxiliaries; we'll see. and i will try to come up with a list of factors that we need to get out of there, and maybe we want a switch for the context. this is not something we can actually monitor now, but it is something we can set. and then you can all imagine a constraint-satisfaction program, depending on what comes out. we want to have a structure resulting, if we feed it through a belief-net along those lines: we'd get an inferred intention; we produce a structure that differentiates between the vista, the enter, and the tango mode. which we maybe want to ignore. but that's my idea. it's up for discussion; we can change all of it, any bit of it, throw it all away.
grad f: now i remember this email that you sent, actually.
professor c: what?
grad f: now i remember the email.
professor c: ok.
grad e: huh. still, i have no recollection whatsoever of the email. i'll have to go back and check.
professor c: not important. so, what is important is that we understand what the proposed task is. robert and i talked about this some on friday, and we think it's well-formed; a well-formed starter task for this deeper understanding in the tourist domain.
grad f: so, where exactly is the deeper understanding being done? like, is it before the bayes-net? is it...
professor c: well, it's always all of it. so, in general, the answer is always going to be: everywhere. the notion is that this isn't real deep, but it's deep enough that you can distinguish between these three quite different kinds of going to see some tourist thing. and so that's the quote "deep" that we're trying to get at.
professor c: and robert's point is that the current front-end not only doesn't do it, it also doesn't give you enough information to do it. it isn't as if you could just take what the front-end gives you, use some clever inference algorithm on it, and figure out which of these is going on. and in general this is going to be true of any deep understanding: there are going to be contextual things, linguistic things, discourse things, and they've got to be combined. my idea on how to combine them is with a belief-net, although it may turn out that some different thing will work better. the idea would be that you take your... you're editing your slide?
grad b: yes, as i get ideas. so, discourse; that needs to go in there.
professor c: i'm sorry. ok. so.
professor c: this is taking minutes as we go, in his own way. anyway. naively speaking, for this little task you've got a belief-net which is going to have as output the conditional probability of one of three things: that the person wants to view it, to enter it, or to tango with it. so the output of the belief-net is pretty well formed. then the inputs are going to be these kinds of things, and the question (there are two questions) is, one, where do you get this information from, and two, what's the structure of the belief-net? what are the conditional probabilities of this, that, and the other, given these things? and you probably need intermediate nodes; we don't know what they are yet. another thing you want is some information about the time of day. now, they may want to call that part of context, but the time of day matters a lot. and if things are closed, then...
professor c: people don't want to enter them. and if it's not obvious, you may want to actually point out to people that what they're going to is closed, and they don't have the option of entering it.
professor c: so another thing that can come up, and will come up as soon as you get serious about this, is that another option is to have more of a dialogue. so if someone says something, you could ask them. now, one thing you could do is always ask them, but that's boring, and it would also be a pain for the person using it. so one thing you could do is build a little system that said, "whenever i get a question like that, i've got one of three answers; ask them which one they want." but that's not what we're going to do.
grad b: but maybe that's a fallback state of the system, when it's too close to call.
professor c: you want the ability to ask, but what you don't want to do is build a system that always asks every time. that's not getting at the scientific problem, and in general it's going to be much more complex than that. this is purposely a really simple case.
grad b: i have one more point on bhaskara's question. also, the deep understanding part is going to be in there to the extent that we want it, in terms of our modeling. we can start from basics, from human beings, and model motions: going, walking, seeing. we can model all of that and then compose whatever inferences we make out of these really conceptual primitives. that would be extremely deep, in my understanding.
professor c: so the way that might come up: suppose you wanted to do that. you might say, as an intermediate step in your belief-net, "is there a source-path-goal schema involved? and if so, is there a focus on the goal, or a focus on the path?" and in some piece of the belief-net, that could be the appropriate thing to enter.
grad f: so, where would we extract that information from? from the m-three-l?
professor c: no. see, what he was saying is that the m-three-l is not going to give you any of that. all it has is some really crude statement: "a person wants to go to a place."
professor c: m-three-l itself refers to multimedia mark-up language.
professor c: so we have to have a better way of referring to...
grad b: no, actually, intention lattices are what we're going to get.
professor c: so they're going to give us, or we can assume that we get, this crude information about intention, and that's all they're going to provide. they don't give you the object, they don't give you any discourse history; if you want to keep that, you have to keep it somewhere else.
grad b: well, they keep it. we have to request it. but it's not in there.
professor c: they keep it by their lights. it may or may not be what we want.
grad e: so, if someone says, "i want to touch the side of the powder tower", we would need to pop up tango mode and the directions?
professor c: if it got as simple as that, sure. but it wouldn't.
grad e: but we'd have to infer a source-path-goal to some degree for touching the side, right?
grad b: there is a point there, if i understand you correctly. sometimes people just say things; you find this very often: "where is the city hall?" and they don't want to see it on a map, or to know that it's five hundred yards away from you, or that it's to your north. they want to go there. but what they say is, "where is it? where is that damn thing?"
grad b: that's a question mark; for a lot of parsers, interpreting that is way beyond their scope. but still, the outcome will be some form of structure with the town hall, maybe saying there's a wh-focus on the town hall. but to interpret it, somebody else has to do that job later.
grad e: i'm just trying to figure out what the smartkom system would output, depending on these things.
grad b: it will probably tell you how far away it is; at least that's what deep map does: it tells you how far away it is and shows it to you on a map. because we cannot differentiate, at the moment, between the intention of wanting to go there and the intention of just wanting to know where it is.
grad d: people might not be able to infer that either, right? like, i could imagine that if someone came up to me and asked, "where's the city hall?", i might say, "are you trying to get there?" because how i describe its location probably depends on whether i should give them directions now, or say, "it's half a mile away", or something like that.
grad b: it's a granularity factor. because when people ask you, "where is new york?", you will tell them it's on the east coast.
grad b: you won't tell them how to get there, take that bus to the airport and so on. but if it's the post office, you will tell them how to get there. they have done some interesting experiments on that in hamburg as well.
grad d: so that context was their presumed purpose context, like business or travel, plus the utterance context, like "i'm now standing at this place at this time".
professor c: as we have all along, we've been distinguishing between situational context, which is what you have as context, and discourse context, which you have as...
professor c: whatever. we can work out terminology later. so, they're quite distinct. you need them both, but they're quite distinct.
professor c: and so what we were talking about doing, as a first shot, is not doing any of the linguistics, except to find out what seems to be useful. the reason the belief-net is in blue is that the notion would be (this may be a bad idea) to take as a first goal: see if we could actually build a belief-net that would make this three-way distinction in a plausible way, given that we have all these transcripts and we're able, by hand, to extract the features to put in the belief-net. saying, "aha! here are the things which, if you get them out of the language and discourse and put them into the belief-net, will tell you which of these three intentions is most likely." and then actually do that: build it, run it on the data where you hand-transcribe the parameters, and see how that goes. if that goes, then we can start worrying about how we would extract them; where would you get this information? and expand it to other things like this. but if we can't do that task, then we're in trouble.
professor c: well, if it's the belief-nets, we'll switch to logic or some terrible thing, but i don't think that's going to be the case. if we can get the information, a belief-net is a perfectly good way of doing the inferential combination of it. the real issue is: what are the factors involved in determining this? and i don't know.
grad d: i missed the beginning, but could you go back to the previous slide? so, are these all factors that you said we are going to ignore now, or that we want to take into account? you were saying...
professor c: how to extract them. so let's find out which ones we need first.
grad d: and is it clear from the data, like, the correct answer in each case? but...
professor c: let's go back to the slide of data.
grad d: that's the thing i'm curious about: do we know from the data which...
grad b: not from that data. but since we are designing an even bigger data collection effort compared to this, we will definitely take care to put it in there, in some shape, way, or form, to see whether we can then get empirically validated data. from this data we can sometimes... and isn't that what we need for a belief-net anyhow? sometimes, when people want to just see it, they phrase it more like this. but that doesn't exclude anybody from phrasing it differently, and then other factors may come into play that change the outcome of the belief-net. so this is exactly what... because you can never be sure. even the most deliberate data collection experiment will never give you data that says, "if it's phrased like that, the intention is this."
grad d: the only way you could get that is if you were to give the subjects a task, right? where your current goal is to...
grad b: yes! that's what we're doing. but we will still get the phrasing all over the place.
grad d: no, that's fine. it's just knowing the intention from the experimental subject.
professor c: from that task, yes. so, you all know this, but we are going to actually use this little room and start recording subjects, probably within a month.
professor c: so this is not any of you guys' worry, except that we may want to push that effort to get information we need. so our job is to figure out how to solve these problems. if it turns out that we need data of a certain sort, then the data collection branch can be asked to do that. and one of the reasons why we're recording the meeting for these guys is that we want their help when we start doing recording of subjects. so you're right, though: you will not have...
grad d: and the other concern that has come up before, too, is what situation this data was collected in. is it the one that you showed in your talk?
grad d: but was this someone actually mobile, like, using a device?
grad b: no. it was mobile, but not with a real wizard system. so there were never answers.
grad d: ok. but the situation of collecting the data matters: here you could imagine them walking around the city, as one situation, and then you have all sorts of other situational context factors that would influence how to interpret, like you said, the scope and things like that. if they're doing it in an "i'm sitting here with a map and asking questions" setting, i would imagine that the data would be really different.
grad b: but it was never the goal of that data collection to serve such a purpose. that's why the tasks were not differentiated by intentionality; there was no label "intention a, intention b, intention c", or "task a, b, c". i'm sure we can produce some if we need it, to help us along those lines. but you've got to leave something for other people to model. so, finding out what the contextual factors of the situation really are is an interesting thing. at the moment i'm curious, and i want to approach it from the end where we can start with this toy system that we can play around with, so that we get a clearer notion of what input we need for it, what suffices and what doesn't. and then we can start worrying about where to get this input. once we are all experts in changing that parser, maybe there are just a couple, three things we need to do, and then we get more, whatever, part of speech, and more construction-type information out of it. it's a pragmatic approach, at the moment.
grad e: how exactly does the data collection work? do they have a map, and then you give them a scenario of some sort?
grad b: ok. imagine you're the subject. you're going to be in here, and you see either the three-d model or a quicktime animation of standing in a square in heidelberg; so you actually see that. the first thing is that you have to read a text about heidelberg, just out of a textbook tourist guide, to familiarize yourself with the odd-sounding german street names, like fischergasse and so on. so that's part one. part two is, you're told that this huge new wonderful computer system exists, that can tell you everything you want to know, and it understands you completely. so you're going to pick up that phone, dial a number, and you get a certain number of tasks that you have to solve. first you have to find out how to get to a certain place, maybe with the intention of buying stamps in there.
grad b: maybe the next task is to get to a certain place and take a picture for your grandchild. the third one is to get information on the history of an object. the fourth one... and then the system breaks down. it crashes, and...
grad b: after the third task, or after the fourth; forget that for now. and then a human operator comes on and apologizes that the system has crashed, but urges you to continue, now with a human operator. and so you have the same tasks again, just with different objects, and you go through it again, and that's it. and one little bit more: you're told that the computer system knows exactly where you are, via gps. when the human operator comes on, that person does not know; the gps has crashed as well. so the person first has to ask you, "where are you?", and you have to tell the person where you are, depending on what you see there. this is a bit that... did we discuss that bit? i squeezed that in now, but it's something that would provide some very interesting data for some people i know.
grad d: so, in the display, you said that you might have a display that shows, like, the...
grad d: so as you move through it, they just track it for themselves...
grad b: i don't think you really move; that would be an enormous technical effort. unless... we can have movies of walking, you walking through heidelberg, and ultimately arriving there. maybe we want to do that.
grad b: the map was intended for: you want to go to that place, and it's there, and you see the label of the name. so we get those names, and their pronunciation, and we can change that.
grad d: so your tasks don't require you to... you're told... so when your task is, i don't know, "go buy stamps" or something like that, do you have to respond? what are you supposed to be telling the system? what you're doing now? or...
grad d: there's no... ok, so it's just, "let's figure out what they would say under the circumstances".
grad b: yes, and we will record both sides. we will record the wizard; in both cases it's going to be a human, in the computer case and in the operator case. and there will be some dialogue: you first have to do this and that, and see what they say. we can instruct the wizard in how expressive and talkative he should be. maybe what you're suggesting is that the data might be too poor if we limit it to this ping-pong: one task results in a question, then there's an answer, and that's the end of the task. you want to have more steps?
grad d: well, i don't know how much direction is given to the subject about their interaction; they're unfamiliar with interacting with the system. all they know is that it's this great system that can do things.
professor c: well, but to some extent this is a different discussion. we have to have this discussion of the experiment, and the data collection, and all that sort of thing, and we do have a student who is a candidate for wizard. she's going to get in touch with me. it's a student of eve's: fey, spelled f-e-y. do you...
grad d: she started taking the class last year and then didn't continue. i guess she's a...
grad d: is she an undergraduate... she is a graduate student, ok. i know her very, very briefly.
grad d: i know she was interested in aspect and things like that.
professor c: so, anyway, she's looking for some more part-time work while she's waiting for graduate school, and she'll be in touch. so we may have someone to do this, and she's got some background in all this; she's a linguistics student. so, nancy, at some point we'll have another discussion on exactly how that's going to go. and jane, but also liz, have offered to help us do this data collection and design.
professor c: so when we get to that, we'll have some people doing it who know what they're doing.
grad d: the reason i was asking about the details is that it's one thing to collect data for, i don't know, speech recognition or various other tasks that have pretty clear correct answers; but with intention, as you point out, there are a lot of other factors, and the question of how to make an appropriate toy version of that is just hard.
grad e: actually, that was my question: is the intention implicit in the scenario that's given?
professor c: so, that's part of what we'll have to figure out. but the problem i was going to try to focus on today was this: let's suppose, by magic, you could collect dialogues in which, one way or the other, you were able to figure out both the intention and the context, and the language that was used. so let's suppose we can get that data. the issue is: can we find a way to featurize it, so that we get some discrete number of features such that, when we know the values of all those features, or as many as possible, we can come up with the best estimate of which of the, in this case three, intentions is most likely.
grad d: what are the three intentions? is it to go there, to see it, and...
professor c: go back: to view it, to enter it... you have no trouble with those being distinct. where you'd take a picture of it might be a rather different place than where you'd enter it. and for an object that's big, getting to the nearest part of it could be quite different from either of those.
grad d: ok, so now i understand the referent of tango mode. i didn't get that before.
grad d: like, how close are you going to be? tango is really close.
grad f: so the question is, what features do you want to try to extract from, say, the parse or whatever? like the presence of a word, or the presence of a certain stem, or a certain construction, or whatever.
professor c: is there a construction, or the object, or anything else that's either in the discourse itself or in the context? so if it turns out that, whatever it is, you want to know whether the person is a tourist or not, that becomes a feature. now, how you determine that is another issue. but for the current problem it would just be: ok, if you can be sure that it's a tourist, versus a businessman, versus a native, that would give you a lot of discriminatory power, and then you'd just have a little section in your belief-net that handled it. in the short run, you'd set those values by hand...
professor c: ...and see how it worked, and then in the longer run you would figure out how you could derive them, from previous discourse or anything else you knew.
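to make the proposal above concrete: a net whose output is the conditional probability of the three intentions given hand-set features. pending the choice of a package, the same computation can be sketched by direct enumeration in python; all node names, feature values, and probabilities below are made up for illustration, not derived from the data:

```python
INTENTIONS = ["vista", "enter", "tango"]

# P(intention | object_type, is_tourist) -- hand-set numbers for illustration
P_INT = {
    ("monument", True):  {"vista": 0.60, "enter": 0.20, "tango": 0.20},
    ("monument", False): {"vista": 0.20, "enter": 0.50, "tango": 0.30},
    ("shop",     True):  {"vista": 0.10, "enter": 0.80, "tango": 0.10},
    ("shop",     False): {"vista": 0.05, "enter": 0.90, "tango": 0.05},
}

# P(phrase cue | intention), e.g. cue = utterance contains "see" / "look"
P_CUE = {"vista": 0.7, "enter": 0.2, "tango": 0.3}

def posterior(object_type, is_tourist, has_see_cue):
    """P(intention | features) by direct enumeration over this tiny net."""
    joint = {}
    for i in INTENTIONS:
        like = P_CUE[i] if has_see_cue else 1.0 - P_CUE[i]
        joint[i] = P_INT[(object_type, is_tourist)][i] * like
    z = sum(joint.values())
    return {i: p / z for i, p in joint.items()}

print(posterior("monument", True, has_see_cue=True))
```

intermediate nodes of the kind mentioned (source-path-goal, time of day, closed or open) would slot in as additional conditioning variables in the same way.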
grad f: so what's the plan? how should we go about figuring out these...
professor c: ok. so, first of all: do either of you guys have a favorite belief-net package that you've played with? javabayes?
professor c: anyway, get one. so one of the things we want to do is pick a package; it doesn't matter which one, presumably one with good interactive abilities, because of what a lot of this is going to be. we don't need one that will solve massive belief-nets quickly; these are not going to get big in the foreseeable future. but we do want one that's easy to interact with and modify, because i think that's a lot of what this is going to be: playing with it. and probably one in which it's easy to have what amounts to transcript files, so that if we have all these cases, cases we make up that have these features, you'd like to be able to say, "ok, here's a bunch of cases." there are even packages where you can do learning: you have all the cases and their results, and there are algorithms that go through and try to set the probabilities for you. probably that's not worth it; my guess is we aren't going to have enough data that's good enough to make the data-fitting ones worth it, but i don't know. so i would say the first task for you two guys is to pick a package. the standard things: you want it stable, and so on. and as soon as we have one, we can start trying to make a first cut at what's going on.
professor c: what i like about this is that it's very concrete. we know what the outcomes are going to be, we have some data that's loose, we can use our own intuition, and we can see how hard it is and, importantly, what intermediate nodes we think we need. so if it turns out that, just by thinking about the problem, you come up with things you really need, the intermediate little pieces in your belief-net, that would be really interesting.
grad b: and it may serve as a platform for a person, maybe me, or whoever, who is interested in doing some linguistic analysis. we have the framenet group here, and we can see what they have found out about those concepts already, the ones that are contained in the data, to come up with a little set of features and maybe even means of extracting them. and all together that could also become a paper that gets published somewhere, if we sit down and write it. when you said javabayes belief-net, were you talking about ones that run on coffee, or ones in the programming language java?
professor c: no, it turns out that there is a bayes-net package built on java libraries, one called...
professor c: ...which is one that people around here use a fair amount. i have no idea whether that's the best, but the obvious advantage is that you can then relatively easily get all the other java packages for guis or whatever else you might want to do; that's why a lot of people doing research use it. but it may not be the best choice, and there are plenty of people around, students in the department, who live and breathe bayes-nets.
professor c: nancy knows him.
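on the learning option mentioned above (algorithms that set the probabilities from a file of cases), here is a minimal sketch of what that fitting amounts to for one conditional probability table, reusing the INTENTIONS list from the earlier sketch; the add-k smoothing and the case format are assumptions:

```python
from collections import Counter, defaultdict

def estimate_cpt(cases, smoothing=1.0):
    """maximum-likelihood (with add-k smoothing) estimate of
    P(intention | object_type, is_tourist) from labeled cases.

    cases: iterable of (features_dict, intention_label) pairs.
    """
    counts = defaultdict(Counter)
    for feats, intention in cases:
        counts[(feats["object_type"], feats["is_tourist"])][intention] += 1
    cpt = {}
    for ctx, ctr in counts.items():
        total = sum(ctr.values()) + smoothing * len(INTENTIONS)
        cpt[ctx] = {i: (ctr[i] + smoothing) / total for i in INTENTIONS}
    return cpt
```

as the professor notes, whether there will ever be enough labeled data to make this fitting worthwhile is an open question; hand-set tables are the starting point.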
professor c: i don't know whether you guys have met kevin yet or not, but...
grad b: since we're all probably pretty sure that the dialogue history is producing xml documents (m-three-l is xml, and the ontology that the student is constructing for me back at eml is in oil, which is also xml, and that's where a lot of knowledge about bakeries, hotels, castles and so on is going to come from), then if the package has that i/o capability and is a java package, we can definitely couple them.
professor c: so we're committed to xml as the interchange format. but that's not a big deal.
professor c: so, in terms of interchanging in and out of any module we build, it'll be xml. and if you're going off to make queries to the ontology, you'll have to deal with its interface. but that's fine, and all of these things have been built with much bigger projects than this in mind, so they have worked very hard: blackboards, multi-way blackboards, ways of interchanging and registering, and so on. i don't think that's even worth us worrying about just yet. if we can get the core of the thing to work in a way that we're comfortable with, then we can get in and out of it with little xml descriptors, i believe.
grad b: yes. i like what you said about getting input from files where you have the data and have specified the features.
professor c: well, you could make an xml format for that.
professor c: that feature-value xml format is probably as good a way as any. so it's also worthwhile, while you're poking around, to poke around for xml packages that do things you'd like.
professor c: and the question is... we'll have to look at that...
grad b: what came to my mind was the notion that if there are nets that can actually try to set their own probability factors based on input in a file format, and we get really wild on this, we may actually want to use some corpora that other people made. if they are in mate, then we get xml documents with discourse annotations, from the discourse act down to the phonetic level. michael has a project recognizing discourse acts, and he does it all in mate, so they're annotating data and more data. so if we think it's worth it one of these days, not with this first prototype but maybe with a second, we'd have the possibility of taking input that's generated elsewhere and learning from that. that would be great.
professor c: it would be, but i don't want to count on it. you can't run your project based on the speculation that the data will come.
professor c: it could happen. so, in terms of what smartkom gives us as m-three-l packages, it could be that they're fine, or it could be that you don't really like them. so we're not required to use their packages. we are required at the end to deliver in their format, but that doesn't control what you do internally.
professor c: i'd like, this week, to have you guys pick the belief-net package and tell us what it is, and give us a pointer so we can play with it. and then, as soon as we have it, we should start trying to populate it for this problem.
professor c: make a first cut at what's going on, and probably the easiest way to do that is some on-line way; you can figure out whether you want to make it a web site or how...
grad b: i was actually more joking with the two or three days. that was the usual jo...
grad b: it will take as long as you guys need. but maybe it might be interesting if the two of you can agree on who's going to be the speaker next monday, to tell us something about the net you picked, and what it does, and how it does it.
grad b: so that will be the assignment for next week: slides on whatever net you picked, what it can do, and how far you've gotten.
professor c: i'd also like to have a first cut at what the belief-net looks like, even if it's really crude.
professor c: and, as i said, what would be really great is if you bring it in, and in the meeting we could say, "here's the package, here's the current net we have; what other ideas do you have?" and then we can think about this idea of making up the data file: get a tentative format for it, let's say xml, that says, "these are the various scenarios we've experienced." we can just add to that file, and when you think you've got a better belief-net, you just run it against this data file.
grad e: and what's the relation of this to changing the table so that the system works in english?
grad b: so, while you were doing this, i received two lovely emails. the full nt and the full linux versions are there. i've downloaded them both; the nt one worked fine, and when i started to unpack the linux one, it told me that i couldn't unpack it because it contains a future date. this is the time difference with germany; i had to wait until one o'clock this afternoon before i was able to unpack it. now it will be my job to get this whole thing running both on swede and on this machine, so that we have it. and then, hoping that my urgent message will come through to ralph and tilman and that they will send some more documentation along... maybe that's what i will do next monday: show the state of the system.
professor c: so the answer, johno, is that these are, at the moment, separate. what one hopes is that when we understand how the analyzer works, we can both worry about converting it to english and worry about how it could extract the parameters we need for the belief-net.
grad e: my question was more about the time frame. so we're going to do belief-nets this week, and then...
professor c: neither of these projects has a real tight time-line, in the sense that over the next month there's a deliverable. so in that sense it's opportunistic: if we don't get any information for these guys for several weeks, then we aren't going to sit around wasting time; we'll just go on and do other things.
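for the proposed data file of scenarios, a tentative xml feature-value format plus a loop that scores a belief-net against it might look like the following; the element names and the `posterior` function (from the earlier toy-net sketch) are illustrative assumptions, not an agreed format:

```python
import xml.etree.ElementTree as ET

CASES = """<cases>
  <case intention="vista">
    <feature name="object_type" value="monument"/>
    <feature name="is_tourist" value="true"/>
    <feature name="has_see_cue" value="true"/>
  </case>
  <case intention="enter">
    <feature name="object_type" value="shop"/>
    <feature name="is_tourist" value="false"/>
    <feature name="has_see_cue" value="false"/>
  </case>
</cases>"""

def run_cases(xml_text, infer):
    """score a belief-net (the `infer` callable) against hand-labeled cases."""
    correct = 0
    cases = ET.fromstring(xml_text)
    for case in cases:
        feats = {f.get("name"): f.get("value") for f in case.findall("feature")}
        post = infer(feats["object_type"],
                     feats["is_tourist"] == "true",
                     feats["has_see_cue"] == "true")
        predicted = max(post, key=post.get)
        correct += predicted == case.get("intention")
    return correct / len(cases)

# e.g. run_cases(CASES, posterior) with the toy net sketched earlier
```

the point of such a file is exactly what the professor describes: the case list only grows, and each improved net is re-run against the same scenarios.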
grad b: But the point is very valid that ultimately we hope both will merge into a harmonious state where we can not only do the bare necessities, i.e., changing the table so the system does exactly in English what it does in German, but also have a system where we can say, "OK, this is what it usually does, and now we add this little thing to it," namely Johno's and Bhaskara's belief-net. We plug it in, and for certain tasks (and we know navigational tasks are going to be a core domain of the new system) it suddenly does much better. It can produce better answers: as I showed you on this map, it can produce a red line that goes to the vista point, or to the tango point, or to the door, which would be great. So not only can you show something sensible, but ultimately a system like this takes people where they want to go, rather than always to the geometric center of a building.
grad b: Which is what they do now. Nancy, you missed that part: we even had to take out a bit of the road network so that it doesn't take you to the wall every time.
grad b: This was an actual problem we encountered, which nobody else has, because car navigation systems don't really care: they get you to the beginning of the street, and some now do the house number. But even that is problematic. If you want to drive to SAP in Walldorf (I'm sure the same is true of Microsoft), it takes you to the postal address, street number and so on, and you are miles away from the entrance, because the postal address is maybe a mailbox somewhere, while the entrance you actually want is somewhere completely different. So unless you're a mail person you really don't want to go there.
professor c: Probably not even then, because you probably can't drop the mail there anyway.
grad d: So, you two who'll be working on this: are you supposed to just do it by thinking about the situation? Can you use the sample data?
grad d: Is there a lot of sample data beyond what you have there?
grad b: There's more than I showed, but it's in part my job to look at that and see whether there are features in there that can be extracted, and to come up with some features that are not empirically based on a real experiment but on your intuition: "Aha, this is maybe a sign for that, and this is maybe a sign for this."
grad f: So later this week we should get together and start thinking about that.
professor c: OK. We can end the meeting, call Adam, and then look at some filthy pictures of Heidelberg.
grad b: And that's why, when it was hit by a cannonball, it exploded.
grad e: I first thought it had something to do with the material; that's why I asked.
The initial task of the EDU group is to work on inferring intentions through context. In the navigational paradigm used for the task, these intentions are to "see", to "enter", or to "get to the closest point of" a building. Purpose-designed experiments will be carried out; however, the starting point is to use existing data to determine possible linguistic, discourse, or situational features that define intentionality. These may include the type of building, the time of day, particular phrases used, or whether the user is a tourist or a native. Initially these features will be hand-coded, but the goal is to find ways of extracting them automatically from the XML data. They will then be fed into a belief-net (implemented with a software package such as JavaBayes) and the conditional probability of each intention calculated. A prototype system will be put together to test hypotheses regarding both the exact nature of the features and how intentions are derived from them.

Inferring intentions in a navigational context is an appropriate task both in project and real-world terms: its goals are clearly defined and contribute to a smarter system. Although there are some preliminary data to work on, task-specific experiments and recordings will take place. The XML format that is going to be used has not been defined, although the SmartKom data can serve as a foundation. In the first instance, the group will have to decide on the software package to be used for the creation and manipulation of the belief-nets. Stability and ease of use, in addition to the ability to handle XML, are the focus at this stage, rather than the ability to handle large amounts of data. Experts in Bayes-nets within ICSI can be consulted on the matter. The decision will be presented at the next meeting, along with a first schema of the belief-nets themselves. Possible intermediate nodes can be added to the nets after this. The hypothesis that there exists a set of features from which intentions can be derived has to be assessed before moving on to how to extract those features from the data automatically.

Current navigation systems do not provide for the user's particular intentions when asking for directions: they always compute the shortest path between source and destination. The SmartKom parser, for example, does not mark up data with features adequate for inferring these intentions. Although it is understood that language, discourse, and situational features will play a role in how intentions are weighed, the exact nature of those features is unclear. Also hard to evaluate at this stage is how the assumed features will combine in a belief-net to provide the conditional probabilities of the user's intentions. Even if these problems are solved, extracting the features from the data may prove to be a bottleneck. The existing data are appropriate only for preliminary work, as they do not include intention-related information. On the other hand, the details of the experiments that will have to be designed to get more appropriate data are not clear-cut and have yet to be settled.

When asking for directions, a user of a navigational device may wish to either view, enter, or simply approach a building. This was identified as an initial problem to be tackled through "deep understanding"-type inferences. There is a set of data from previous work to start on.
Similarly, the SmartKom data format and an ontology developed for the tourist domain (both in XML-standard formats) can be used as groundwork for defining the features that indicate the user's intentions. For the creation and management of the belief-nets necessary for the task, there are readily available packages, such as JavaBayes, and tools that can provide the infrastructure for a prototype system.
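As a purely illustrative sketch of the belief-net computation the summary describes, the snippet below turns a few hand-coded feature values into conditional probabilities over the three intentions. It uses a naive-Bayes-style combination as a stand-in for a real belief-net package such as JavaBayes, and every feature name and probability in it is invented:

```python
# Toy illustration of the computation the summary describes: combine a prior
# over the three intentions with per-feature likelihoods and normalise.
# This is a naive-Bayes-style stand-in for a real belief-net; all numbers
# and feature names below are invented for illustration.
PRIOR = {"vista": 0.3, "enter": 0.5, "tango": 0.2}
LIKELIHOOD = {
    ("object_type", "tower"):        {"vista": 0.6, "enter": 0.3, "tango": 0.1},
    ("phrase_class", "want_to_see"): {"vista": 0.7, "enter": 0.2, "tango": 0.1},
    ("user_type", "tourist"):        {"vista": 0.5, "enter": 0.4, "tango": 0.1},
}

def intention_posterior(features):
    """Return P(intention | features) under the toy model above."""
    score = dict(PRIOR)
    for key in features.items():
        table = LIKELIHOOD.get(key)
        if table:  # unknown feature values are treated as uninformative
            for intention in score:
                score[intention] *= table[intention]
    z = sum(score.values())
    return {i: p / z for i, p in score.items()}

print(intention_posterior({"object_type": "tower",
                           "phrase_class": "want_to_see",
                           "user_type": "tourist"}))
# -> "vista" comes out on top, as the hand-set numbers intend
```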
###dialogue: grad a: ok , we 're on . so just make that th your wireless mike is on , if you 're wearing a wireless . grad a: and you should be able to see which one you 're on by , watching the little bars change . grad a: so , actually , if you guys wanna go ahead and read digits now , as long as you ' ve signed the consent form , that 's alright . grad a: each individually . we 're talking about doing all at the same time but cognitively that would be really difficult . to try to read them while everyone else is . grad a: so , when you 're reading the digit strings , the first thing to do is just say which transcript you 're on . professor c: other way . we m we may wind up with ver we we may need versions of all this garbage . grad a: . so the first thing you 'd wanna do is just say which transcript you 're on . so . you can see the transcript ? there 's two large number strings on the digits ? so you would just read that one . and then you read each line with a small pause between the lines . and the pause is just so the person transcribing it can tell where one line ends and the other begins . and i 'll give i 'll read the digit strings first , so can see how that goes . again , i ' m not how much i should talk about before everyone 's here . grad a: ok . , why do n't i go ahead and read digit strings and then we can go on from there . grad a: so , just also a note on wearing the microphones . all of you look like you 're doing it reasonably correctly , but you want it about two thumb widths away from your mouth , and then , at the corner . and that 's so that you minimize breath sounds , so that when you 're breathing , you do n't breathe into the mike . , that 's good . and so , everyone needs to fill out , only once , the speaker form and the consent form . and the short form , you should read the consent form , but , the thing to notice is that we will give you an opportunity to edit a all the transcripts . so , if you say things and you do n't want them to be released to the general public , which , these will be available at some point to anyone who wants them , you 'll be given an opportunity by email , to bleep out any portions you do n't like . on the speaker form just fill out as much of the information as you can . if you 're not exactly about the region , we 're not exactly either . so , do n't worry too much about it . the it 's just self rating . and that 's about it . , should i do you want me to talk about why we 're doing this and what this project is ? professor c: whether she knows is another question . so are the people going to be identified by name ? grad a: , what we 're gon na we 'll anonymize it in the transcript . , but not in the audio . professor c: so , then in terms of people worrying about , excising things from the transcript , it 's unlikely . since it does is n't attributed . , i see , but the a but the grad a: right , so if i said , " , hi jerry , how are you ? " , we 're not gon na go through and cancel out the " jerry " s . , so we will go through and , in the speaker id tags there 'll be , m - one o seven , m - one o eight . , but , it w , i a good way of doing it on the audio , and still have people who are doing discourse research be able to use the data . grad a: but we find that we want the meeting to be as natural as possible . so , we 're trying to do real meetings . and so we do n't wanna have to do aliases and we do n't want people to be editing what they say . 
grad a: so that it 's better just as a pro post - process to edit out every time you bash microsoft . grad a: . so this is the project is called meeting recorder and there are lots of different aspects of the project . so my particular interest is in the pda of the future . this is a mock - up of one . yes , we do believe the pda of the future will be made of wood . the idea is that you 'd be able to put a pda at the table at an impromptu meeting , and record it , and then be able to do querying and retrieval later on , on the meeting . so that 's my particular interest , is a portable device to do m , information retrieval on meetings . other people are interested in other aspects of meetings . so the first step on that , in any of these , is to collect some data . and so what we wanted is a room that 's instrumented with both the table top microphones , and these are very high quality pressure zone mikes , as the close talking mikes . what the close talk ng talking mikes gives us is some ground truth , gives us , high quality audio , especially for people who are n't interested in the acoustic parts of this corpus . so , for people who are more interested in language , we did n't want to penalize them by having only the far field mikes available . and then also , it 's a very , very hard task in terms of speech recognition . and so , on the far field mikes we can expect very low recognition results . so we wanted the near field mikes to at least isolate the difference between the two . so that 's why we 're recording in parallel with the close talking and the far field at the same time . and then , all these channels are recorded simultaneously and framed synchronously so that you can also do things like , beam - forming on all the microphones and do research like that . our intention is to release this data to the public , probably through f through a body like the ldc . and , just make it as a generally available corpus . there 's other work going on in meeting recording . so , we 're working with sri , with uw , . nist has started an effort which will include video . we 're not including video , . and and then also , a small amount of assistance from ibm . is also involved . , and the digit strings , this is just a more constrained task . so because the general environment is so challenging , we decided to do at least one set of digit strings to give ourselves something easier . and it 's exactly the same digit strings as in ti - digits , which is a common connected digits corpus . so we 'll have some , comparison to be able to be made . anything else ? grad a: ok , so when the l last person comes in , just have them wear a wireless . it should be on already . either one of those . and , read the digit strings and fill out the forms . so , the most important form is the consent form , so just be s be everyone signs that , if they consent . grad b: i ' m it 's pretty usual for meetings that people come late , so you will have to leave what you set . grad a: and , just give me a call , which , my number 's up there when your meeting is over . and i ' m going to leave the mike here but it 's n { nonvocalsound } , but i ' m not gon na be on so do n't have them use this one . it 'll just be sitting here . professor c: , adam , we will be using the , screen as . so , . ! organization . so you guys who got email about this f , friday about what we 're up to . 
professor c: , this was about , inferring intentions from features in context , and the words , like " s go to see " , or " visit " , or some professor c: i these g have got better filters . cuz i sent it to everybody . you just blew it off . grad b: it 's really simple though . so this is the idea . we could pursue , if we thought it 's worth it but , we will agree on that , to come up with a very , very first crude prototype , and do some implementation work , and do some research , and some modeling . so the idea is if you want to go somewhere , and focus on that object down , actually walk with this . this is . down here . that 's the powder - tower . now , we found in our , data and from experiments , that there 's three things you can do . , you can walk this way , and come really , really close to it . and touch it . but you can not enter or do anything else . unless you 're interested in rock climbing , it wo n't do you no good standing there . it 's just a dark alley . but you can touch it . if you want to actually go up or into the tower , you have to go this way , and then through some buildings and up some stairs and . if you actually want to see the tower , and that 's what actually most people want to do , is just have a good look of it , take a picture for the family , you have to go this way , and go up here . and there you have a vre really view it exploded , the during the thirty years - war . really , interesting sight . and , these lines are , paths , grad b: or so that 's ab er , i the street network of our geographic information system . and you can tell that we deliberately cut out this part . because otherwise we could n't get our gis system to take to lead people this way . it would always use the closest point to the object , and then the tourists would be faced , in front of a wall , but it would do them no good . so , what we found interesting is , first of all , intentions differ . maybe you want to enter a building . maybe you want to see it , take a picture of it . or maybe you actually want to come as close as possible to the building . for whatever reason that may be . grad b: , maybe you would want to touch it . , i this , these intentions , we w we could , if we want to , call it the vista mode , where we just want to s get the overview or look at it , the enter mode , and the , tango mode . i always come up with silly names . so this " tango " means , literally translated , " to touch " . so but sometimes the tango mode is really relevant in the sense that , if you want to , if you do n't have the intention of entering your building , but that something is really close to it , and you just want to approach it , or get to that building . consider , the post office in chicago , a building so large that it has its own zip code . so the entrance could be miles away from the closest point . so sometimes it m makes sense maybe to d to distinguish there . so , i ' ve looked , through twenty some , i did n't look through all the data . , and there 's , a lot more different ways in people , the ways people phrase how to g get if they want to get to a certain place . and sometimes here it 's b it 's a little bit more obvious . maybe i should go back a couple of steps and go through the professor c: no , ok come in , sit down . if you grab yourself a microphone . grad b: ok . that was the idea . , people , when they w when they want to go to a building , sometimes they just want to look at it . sometimes they want to enter it . 
and sometimes they want to get really close to it . that 's something we found . it 's just a truism . and the places where you will lead them for these intentions are sometimes ex in incredibly different . i gave an example where the point where you end up if you want to look at it is completely different from where if you want to enter it . so , this is how people may , may phrase those requests to a mock - up system at least that 's the way they did it . and we get tons of these " how do i get to " , " i want to go to " , but also , " give me directions to " , and " i would like to see " . and , what we can do , if we look closer a closer at the data that was the wrong one . , we can look at some factors that may make a difference . first of all , very important , and , that i ' ve completely forgot that when we talked . this is a crucial factor , " what type of object is it ? " so , some buildings you just do n't want to take pictures of . or very rarely . but you usually want to enter them . some objects are more picturesque , and you more f more highly photographed . then the actual phrases may give us some idea of what the person wants . sometimes i found in the , looking at the data , in a superficial way , i found some s modifiers that m may also give us a hint , " i ' m trying to get to " nuh ? i need to get to . hints to the fact that you 're not really sightseeing and just f there for pleasure and so on . and this leads us straight to the context which also should be considered . that whatever it is you 're doing at the moment may also inter influence the interpretation of a phrase . so , this is , really , my suggestion is really simple . we start with , now , let me , say one more thing . what we do know , is that the parser we use in the smartkom system will never differentiate between any of these . so , all of these things will result in the same xml m - three - l structure . action " go " , and then an object . grad b: ? and a source . so it 's way too crude to d capture those differences in intentions . so , " mmm ! maybe for a deep understanding task , that 's a playground or first little thing . " where we can start it and n look ok , we need , we gon na get those m - three - l structures . bed002bdialogueact345 114153 114371 b grad s -1 0 the crude , undifferentiated parse . bed002bdialogueact346 114435 114555 b grad s^e -1 0 interpreted input . bed002bdialogueact347 114672 115529 b grad s -1 0 we may need additional part of speech , or maybe just some information on the verb , and modifiers , auxiliaries . bed002bdialogueact348 115586 11563 b grad s^rt -1 0 we 'll see . bed002bdialogueact349 115703 116124 b grad s^cc^rt +1 and i will try to come up with a list of factors that we need to get out of there , bed002bdialogueact350 116187 116562 b grad s +1 0 and maybe we want to get a g switch for the context . bed002bdialogueact351 116596 117036 b grad s -1 0 so this is not something which we can actually monitor , now , bed002bdialogueact352 117097 117246 b grad s -1 0 but just is something we can set . bed002bdialogueact353 117379 118088 b grad s -1 0 and then you can all imagine a constrained satisfaction program , depending on what , comes out . bed002bdialogueact354 11813 118438 b grad s%-- -1 0 we want to have an a structure resulting bed002bdialogueact355 118486 119762 b grad s +1 if we feed it through a belief - net along those lines . 
we 'd get an inferred intention , we produce a structure that differentiates between the vista , the enter , and the , tango mode . bed002bdialogueact356 119814 12011 b grad s -1 0 which we maybe want to ignore . bed002bdialogueact357 12012 120304 b grad s^rt -1 0 but . that 's my idea . bed002bdialogueact358 120322 120402 b grad s -1 0 it 's up for discussion . bed002bdialogueact359 120462 120586 b grad s -1 0 we can change all of it , bed002bdialogueact360 120586 120679 b grad s -1 0 any bit of it . bed002bdialogueact361 120869 120931 b grad s -1 0 throw it all away . bed002fdialogueact362 121157 121305 f grad s -1 0 now @ this email that you sent , actually . bed002cdialogueact363 12131 121335 c professor qw^br -1 0 what ? bed002fdialogueact364 121345 12143 f grad s -1 0 now i remember the email . bed002cdialogueact365 121477 121511 c professor s^bk -1 0 ok . bed002edialogueact366 121617 121644 e grad b -1 0 huh . bed002edialogueact367 121669 122029 e grad s -1 0 still , i have no recollection whatsoever of the email . bed002edialogueact368 122029 12214 e grad s -1 0 i 'll have to go back and check . bed002cdialogueact369 12218 122244 c professor s^ba^bd -1 0 not important . bed002cdialogueact370 122364 123063 c professor s -1 0 so , what is important is that we understand what the proposed task is . bed002cdialogueact371 123198 123574 c professor s -1 0 and , the i , robert and i talked about this some on friday . bed002cdialogueact372 123684 123973 c professor fh|s -1 0 and we think it 's - formed . bed002cdialogueact373 124027 124884 c professor s +1 2 so we think it 's a - formed , starter task for this , deeper understanding in the tourist domain . bed002fdialogueact374 125295 125527 f grad qw -1 0 so , where exactly is the , deeper understanding being done ? bed002fdialogueact375 125527 12571 f grad qy -1 0 like , s is it before the bayes - net ? bed002fdialogueact376 12571 125767 f grad qy%-- -1 0 is it , bed002cdialogueact377 125776 126026 c professor s -1 0 well , it 's the it 's always all of it . bed002cdialogueact378 126049 126429 c professor s -1 0 so , in general it 's always going to be , the answer is , everywhere . bed002cdialogueact379 126495 126966 c professor s -1 0 uh , so the notion is that , this is n't real deep . bed002cdialogueact380 127005 127699 c professor s -1 0 but it 's deep enough that you can distinguish between these th three quite different kinds of , going to see some tourist thing . bed002cdialogueact381 127794 128162 c professor s -1 0 and , so that 's the quote deep " that we 're trying to get at . professor c: and , robert 's point is that the current front - end does n't give you any way to not only does n't it do it , but it also does n't give you enough information to do it . it is n't like , if you just took what the front - end gives you , and used some clever inference algorithm on it , you would be able to figure out which of these is going on . so , and this is bu - i in general it 's gon na be true of any deep understanding , there 's gon na be contextual things , there 're gon na be linguistic things , there 're gon na be discourse things , and they got ta be combined . and , my idea on how to combine them is with a belief - net , although it may turn out that t some different thing is gon na work better . , the idea would be that you , take your you 're editing your slide ? grad b: . as i a , as i get ideas , w . so , discourse about that . that needs to go in there . professor c: . i ' m . ok . so . 
this is minutes taking minutes as we go , in his own way . , but the p the anyway . so , i , d naively speaking , you ' ve got a for this little task , a belief - net , which is going to have as output , the conditional pr probability of one of three things , that the person wants to , to view it , to enter it , or to tango with it . so that the output of the belief - net is pretty formed . and , then the inputs are going to be these kinds of things . and , then the question is there are two questions is , one , where do you get this i information from , and two , what 's the structure of the belief - net ? so what are the conditional probabilities of this , that , and the other , given these things ? and you probably need intermediate nodes . i we what they are yet . so it may be that , , that , knowing whether , another thing you want is some information abou , about the time of day . now , they may wanna call that part of context . but the time of day matters a lot . and , if things are closed , then , you professor c: pe - people do n't wanna enter them . and , if it 's not obvious , you may want to actually , point out to people that it 's closed , what they 're g going to is closed and they do n't have the option of entering it . professor c: so another thing that can come up , and will come up as soon as you get serious about this is , that another option is to have a more of a dialogue . so if someone says something you could ask them . and now , one thing you could do is always ask them , but that 's boring . and it also w it also be a pain for the person using it . so one thing you could do is build a little system that , said , " whenever you got a question like that i ' ve got one of three answers . ask them which one you want . " but that 's , not what we 're gon na do . grad b: but maybe that 's a false state of the system , that it 's too close to call . professor c: you want the you want the ability to a you want the ability to ask , but what you do n't wanna do is onl build a system that always asks every time , and i that 's not getting at the scientific problem , and it 's in general you 're , it 's gon na be much more complex than that . a this is purposely a really simple case . so , grad b: i have one more point to bhaskara 's question . , also the deep understanding part of it is going to be in there to the extent that we , want it in terms of our modeling . we can start , basic from human beings , model that , its motions , going , walking , seeing , we can mem model all of that and then compose whatever inferences o we make out of these really conceptual primitives . that will be extremely deep in the in my understanding . professor c: . s so the way that might come up , if you wanna suppose you wanted to do that , you might say , " , as an intermediate step in your belief - net , is there a source - path - goal schema involved ? " ok ? and if so , is there a focus on the goal ? or is there a focus on the path ? and that could be , one of the conditiona , th the in some piece of the belief - net , that could be the appropriate thing to enter . grad f: so , where would we extract that information from ? from the m - three - l ? professor c: no . see , the m - three - l is not gon na give th what he was saying is , the m - three - l does not have any of that . all it has is some really crude saying , " a person wants to go to a place . " professor c: m - three , m - three - l itself refers to multimedia mark - up language . 
professor c: so we have th w we have to have a better w way of referring to grad b: really , oder o th no , actually , intention lattices is what we 're gon na get . professor c: so , th they 're gon na give us some cr or we can assume that y you get this crude information . about intention , and that 's all they 're going to provide . and they do n't give you the object , they do n't give you any discourse history , if you want to keep that you have to keep it somewhere else . grad b: , they keep it . we have to request it . but it 's not in there . professor c: , they kee they keep it by their lights . it may it may or may not be what we want . grad e: so , if someone says , " i wanna touch the side of the powder - tower " , that would , we need to pop up tango mode and the directions ? professor c: if i if , if it got as simple as that , . but it would n't . grad e: but that does n't necessarily but we 'd have to infer a source - path - goal to some degree for touching the side , right ? grad b: , th the there is a p a point there if i understand you . correct ? because , sometimes people just say things this you find very often . where is the city hall ? and this do they do n't wanna sh see it on a map , or they do n't wanna know it 's five hundred yards away from you , or that it 's to the your north . they wanna go there . that 's what they say , is , " where is it ? " . where is that damn thing ? grad b: , that 's a question mark . sh a lot of parsers , just , that 's way beyond their scope , is of interpreting that . but , still outcome w the outcome will be some form of structure , with the town hall and maybe saying it 's a wh focus on the town hall . but to interpret it , somebody else has to do that job later . grad e: i ' m just trying to figure out what the smartkom system would output , depending on these things . grad b: , it will probably tell you how far away it is , at least that 's that 's even what deep map does . it tells you how far away it is , and shows it to you on a map . because i we can not differentiate , at the moment , between , the intention of wanting to go there or the intention of just know wanting to know where it is . grad d: people no might not be able to infer that either , right ? like the fact like , i could imagine if someone came up to me and asked , " where 's the city hall ? " , i might say , g ar " are you trying to get there ? " because how i describe , t its location , p probably depend on whether i should give them , directions now , or say , whatever , " it 's half a mile away " like that . grad b: it 's a granularity factor , because where people ask you , " where is new york ? " , you will tell them it 's on the east coast . grad b: y y you wo n't tell them how to get there , ft , take that bus to the airport and blah - blah . but if it 's the post office , you will tell them how to get there . so th they have done some interesting experiments on that in hamburg as . so . grad d: so that context was like , their presumed purpose context , i like business or travel , as the utterance context , like , " i ' m now standing at this place at this time " . professor c: , we ought to d a as we have all along , d we we ' ve been distu distinguishing between situational context , which is what you have as context , and discourse context , which you have as dh , professor c: whatever . so we can work out terminology later . so , they 're quite distinct . , you need them both , but they 're quite distinct . 
and , so what we were talking about doing , a as a first shot , is not doing any of the linguistics . except to find out what seems to be useful . so , the reason the belief - net is in blue , is the notion would be , this may be a bad dis bad idea , but the idea is to take as a first goal , see if we could actually build a belief - net that would make this three way distinction , in a plausible way , given these we have all these transcripts and we 're able to , by hand , extract the features to put in the belief - net . saying , " aha ! here 're the things which , if you get them out of the language and discourse , and put them into the belief - net , it would tell you which of these three , intentions is most likely . " and if to actually do that , build it , run it y run it on the data where you hand - transcribe the parameters . and see how that goes . if that goes , then we can start worrying about how we would extract them . so where would you get this information ? and , expand it to other things like this . but if we ca n't do that , then we 're in trouble . th i if you ca n't do this task , professor c: , , . it i i if it 's the belief - nets , we 'll switch to , logic or some terrible thing , but i do n't think that 's gon na be the case . that , if we can get the information , a belief - net is a perfectly good way of doing the inferential combination of it . the real issue is , do what are the factors involved in determining this ? and i . grad d: , i missed the beginning , but , i could you back to the slide , the previous one ? so , is it that it 's , these are all factors that , a these are the ones that you said that we are going to ignore now ? or that we want to take into account ? you were saying n professor c: how to extract them . so , f let 's find out which ones we need first , grad d: and and it 's clear from the data , like , sorta the correct answer in each case . but l professor c: let 's go back to th let 's go back to the slide of data . grad d: that 's that 's the thing i ' m curious ab like do we know from the data wh which so grad b: not from that data . but , since we are designing a an , compared to this , even bigger data collection effort , we will definitely take care to put it in there , in some shape , way , form over the other , to see whether we can , then , get empirically validated data . , from this , we can sometimes , an and that 's that but that is n't that what we need for a belief - net anyhow ? is s sometimes when people want to just see it , they phrase it more like this ? but it does n't exclude anybody from phrasing it differently , even if they still but then other factors may come into play that change the outcome of their belief - net . so , this is exactly what because y you can never be . and i ' m even i the most , deliberate data collection experiment will never give you data that say , " , if it 's phrased like that , the intention is this . " , because then , you grad d: u , the only way you could get that is if you were to give th the x subjects a task . right ? where you have where your , current goal is to grad b: we ! that 's what we 're doing . but but we will still get the phrasing all over the place . grad d: the no , that 's fine . i , it 's just knowing the intention from the experimental subject . professor c: from that task , . so , you all know this , but we are going to actually use this little room and start recording subjects probably within a month . 
so , this is not any lo any of you guys ' worry , except that we may want to push that effort to get information we need . so our job is to figure out how to solve these problems . if it turns out that we need data of a certain sort , then the data collection branch can be , asked to do that . and one of the reasons why we 're recording the meeting for these guys is cuz we want their help when we d we start doing , recording of subjects . so , y you 're right , though . no , you will not have , and there it is , and , but , y the , grad d: and the other concern that has come up before , too , is if it 's i if this was collected what situation this data was collected in . was it is it the one that you showed in your talk ? like people grad d: but so was this , like , someone actually mobile , like s using a device ? grad b: n no , no not i it was mobile but not with a w a real wizard system . so there were never answers . grad d: ok . but , is it i the situation of collecting th the data of , like here you could imagine them being walking around the city . as like one situation . and then you have all sorts of other c situational context factors that would influence w how to interpret , like you said , the scope and things like that . if they 're doing it in a , " i ' m sitting here with a map and asking questions " , i would imagine that the data would be really different . , so it 's just grad b: but it was never th the goal of that data collection to serve for sat for such a purpose . so that 's why the tasks were not differentiated by intentionality , there was n there was no label , intention a , intention b , intention c . or task a , b , c . i ' m we can produce some if we need it , that will help us along those lines . but , you got ta leave something for other people to model . so , to finding out what , situational con what the contextual factors of the situation really are , is an interesting s interesting thing . u i ' m , at the moment , curious and i ' m s w want to approach it from the end where we can s start with this toy system that we can play around with , so that we get a clearer notion of what input we need for that , what suffices and what does n't . and then we can start worrying about where to get this input , what do we need , ultimately once we are all experts in changing that parser , maybe , there 's just a couple three things we need to do and then we get more whatever , part of speech and more construction - type - like out of it . it 's a m pragmatic approach , at the moment . grad e: how exactly does the data collection work ? do they have a map , and then you give them a scenario of some sort ? grad b: ok . imagine you 're the subject . you 're gon na be in here , and somebody and and you see , either th the three - d model , or , a quicktime animation of standing u in a square in heidelberg . so you actually see that . the , first thing is you have to read a text about heidelberg . so , just off a textbook , tourist guide , to familiarize , yourself with that odd - sounding german street names , like fischergasse and . so that 's part one . part two is , you 're told that this huge new , wonderful computer system exists , that can y tell you everything you want to know , and it understands you completely . and so you 're gon na pick up that phone , dial a number , and you get a certain amount of tasks that you have to solve . first you have to know find out how to get to that place , maybe with the intention of buying stamps in there . 
maybe so , the next task is to get to a certain place and take a picture for your grandchild . the third one is to get information on the history of an object . the fourth one and then the g system breaks down . it crashes , and grad b: after the third task . and then or after the fourth . some find @ forget that for now . and then , a human operator comes on , and exp apologizes that the system has crashed , but , urges you to continue , ? now with a human operator . and so , you have the same tasks again , just with different objects , and you go through it again , and that was it . , and one little bit w and , the computer you are being told the computer system knows exactly where you are , via gps . when the human operator comes on , that person does not know . so the gps is crashed as . so the person first has to ask you " where are you ? " . and so you have to do some s tell the person where you are , depending on what you see there . , this is a bit that i d i do n't think we did we discuss that bit ? , squeezed that in now . but it 's something , that would provide some very interesting data for some people i know . grad d: so , in the display you can , you said that you cou you might have a display that shows , like , the grad d: so as you move through it that 's - they just track it on the for themselves grad b: i . i but y i do n't think you really move , that would be an enormous technical effort , unless we would we can show it walks to , . we can have movies of walking , you walking through heidelberg , and u ultimately arriving there . maybe we wanna do that . grad b: the map was intended to you want to go to that place . , and it 's there . and you see the label of the name so we get those names , pronunciation , and we can change that . grad d: so your tasks do n't require you to , yo you 're told so when your task is , i , " go buy stamps " like that ? so , do you have to respond ? or does your , what are you ste what are you supposed to be telling the system ? like , w what you 're doing now ? or grad d: there 's no ok , so it 's just like , " let 's figure out what they would say under the circumstances " . grad b: , and we will record both sides . , we will record the wi - the wizard , in both cases it 's gon na be a human , in the computer , and in the operator case . and we will re there will be some dialogue , ? so , you first have to do this , and that , and see wh what they say . we can ins instruct the , wizard in how expressive and talkative he should be . maybe the maybe what you 're suggesting is what you 're suggesting that it might be too poor , the data , if we limit it to this ping pong one t , task results in a question and then there 's an answer and that 's the end of the task ? you wanna m have it more steps , ? grad d: , i how much direction is given to the subject about what their interaction , th they 're unfamiliar w with interacting with the system . all they know is it 's this great system that could do s . professor c: , but to some extent this is a different discussion . so . , we have to have this discussion of th the experiment , and the data collection , and all that sorta and we do have , a student who is a candidate for wizard . , she 's gon na get in touch with me . it 's a student of eve 's . fey , fey ? spelled fey . do you do you grad d: she started taking the class last year and then did n't , did n't continue . i g she 's a g grad d: is she an undergradua she is a graduate , ok . , i m i know her very , very briefly . 
i know she was inter , interested in aspect and like that . professor c: so , anyway , she 's looking for some more part time work w while she 's waiting actually for graduate school . and she 'll be in touch . so we may have someone , to do this , and she 's got , some background in all this . and is a linguist st and , so so . that 's so , nancy , we 'll have an at some point we 'll have another discussion on exactly wha t , how that 's gon na go . and , jane , but also , liz have offered to help us do this , data collection and design and . professor c: so , when we get to that we 'll have some people doing it that they 're doing . grad d: i the reason i was asking about the de the details of this thing is that , it 's one thing to collect data for , i , speech recognition or various other tasks that have pretty c clear correct answers , but with intention , as you point out , there 's a lot of di other factors and i ' m not really , how e the question of how to make it a t appropriate toy version of that , it 's ju it 's just hard . so , it 's a grad e: , actually i that was my question . is the intention implicit in the scenario that 's given ? like , do the professor c: so , we that 's part of what we 'll have to figure out . but , the the problem that i was tr gon na try to focus on today was , let 's suppose by magic you could collect dialogues in which , one way or the other , you were able to , figure out both the intention , and set the context , and language was used . so let 's suppose that we can get that data . the issue is , can we find a way to , featurize it so that we get some discrete number of features so that , when we know the values to all those features , or as many as possible , we can w come up with the best estimate of which of the , in this case three little intentions , are most likely . grad d: w what are the t three intentions ? is it to go there , to see it , and professor c: go back . to v to view it . to enter it . now those it seems to me those are cl you c you have no trouble with those being distinct . take a picture of it you might want to be a really rather different place than entering it . and , for an object that 's big , getting to the nearest part of it , could be quite different than either of those . just grad d: ok , so now i understand the referent of tango mode . i did n't get that before . grad d: , like , how close are you gon na be ? like , tango 's really close . grad f: all these so , like , the question is how what features can like , do you wanna try to extract from , say , the parse or whatever ? like , the presence of a word or the presence of a certain , stem , or certain construction or whatever . professor c: is there a construction , or the object , or w , anything else that 's in the si it 's either in the s the discourse itself or in the context . so if it turns out that , whatever it is , you want to know whether the person 's , a tourist or not , ok ? that becomes a feature . now , how you determine that is another issue . but fo for the current problem , it would just be , " ok , if you can be that it 's a tourist , versus a businessman , versus a native , " , that would give you a lot of discriminatory power and then just have a little section in your belief - net that said , " pppt ! " though sin f in the short run , you 'd set them , professor c: and see ho how it worked , and then in the longer run , you would figure out how you could derive them . from previous discourse or w any anything else you knew . 
grad f: so , how should what 's the , plan ? like , how should we go about figuring out these professor c: ok . so , first of all is , do e either of you guys , you got a favorite belief - net that you ' ve , played with ? javabayes ? professor c: , anyway . f get one . ok ? so y so one of th one of the things we wanna do is actually , pick a package , does n't matter which one , presumably one that 's got good interactive abilities , cuz a lot of what we 're gon na be d , we do n't need the one that 'll solve massive , belief - nets quickly . d w these are not gon na get big in the foreseeable future . but we do want one in which it 's easy to interact with and , modify . because i that 's a lot of what it 's gon na be , is , playing with this . and probably one in which it 's easy to have , what amounts to transcript files . so that if we have all these cases so we make up cases that have these features , ok , and then you 'd like to be able to say , " ok , here 's a bunch of cases " there 're even ones tha that you can do learning ok ? so you have all their cases and their results and you have a algorithms to go through and run around trying to set the probabilities for you . , probably that 's not worth it . , my is we are n't gon na have enough data that 's good enough to make the these data fitting ones worth it , but i . so i would say you guy the first task for you two guys is to , pick a package . ok , and you wanna it s , the standard things you want it stable , you want it , @ . and , as soon as we have one , we can start trying to , make a first cut at what 's going on . professor c: but it what i like about it is it 's very concrete . ok ? we we have a we the outcomes are gon na be , and we have some data that 's loose , we can use our own intuition , and see how hard it is , and , importantly , what intermediate nodes we think we need . so it if it turns out that just , thinking about the problem , you come up with things you really need to , this is the thing that is , an intermediate little piece in your belief - net . that 'd be really interesting . grad b: and it and it may serve as a platform for a person , maybe me , or whoever , who is interested in doing some linguistic analysis . , w we have the for - framenet group here , and we can see what they have found out about those concepts already , that are contained in the data , , to come up with a little set of features and , maybe even means of s , extracting them . and and that altogether could also be , become a paper that 's going to be published somewhere , if we sit down and write it . when you said javabayes belief - net you were talking about ones that run on coffee ? or that are in the program language java ? professor c: no , th it turns out that there is a , the new end of java libraries . ok , and it turns out one called professor c: which is one that fair people around here use a fair amount . i have no idea whether that 's the obvious advantage of that is that you can then , relatively easily , get all the other java packages for guis or whatever else you might want to do . so that i that 's why a lot of people doing research use that . but it may not be i have no idea whether that 's the best choice an and there 're plenty of people around , students in the department who , live and breathe bayes - nets . professor c: nancy knows him . 
i whether you guys have met kevin yet or not , but , grad b: but i but since we all probably are pretty that , the , this th the dialogue history is , producing xml documents . m - three - l is xml . and the ontology that , the student is constructing for me back in eml is in oil and that 's also in xml . and so that 's where a lot of knowledge about bakeries , about hotels , about castles and is gon na come from . , so , if it has that io capability and if it 's a java package , it will definitely be able we can couple . professor c: so , we 're { nonvocalsound } committed to xml as the , interchange . but that 's , not a big deal . professor c: so , in terms of interchanging in and out of any module we build , it 'll be xml . and if you 're going off to queries to the ontology , you 'll have to deal with its interface . but that 's fine an and , all of these things have been built with much bigger projects than this in mind . so they have worked very hard . it 's blackboards and multi - wave blackboards and ways of interchanging and registering your a and . that i do n't think is even worth us worrying about just yet . if we can get the core of the thing to work , in a way that we 're comfortable with , then we ca we can get in and out of it with , xml , little descriptors . i believe . grad b: . , i like , the what you said about the getting input from just files about where you h where you have the data , have specified the features and . professor c: , you could have an x , you could make and xml format for that . professor c: that that , feature value xml format is probably as good a way as any . so it 's als , i it 's also worth , while you 're poking around , poke around for xml packages that , do things you 'd like . professor c: and the question is , d you c you 'll have to l we 'll have to l that should be ay we should be able to look at that grad b: no , u y the what i what came to my mind i is was the notion of an idea that if there are l nets that can actually lear try to set their own , probability factors based on input which is in file format , if we , get really w wild on this , we may actually want to use some corpora that other people made and , if they are in mate , then we get x m l documents with discourse annotations , t from the discourse act down to the phonetic level . , michael has a project where , recognizing discourse acts and he does it all in mate , and so they 're actually annotating data and data . so if we w if we think it 's worth it one of these days , not with this first prototype but maybe with a second , and we have the possibility of taking input that 's generated elsewhere and learn from that , that 'd be . professor c: it 'd be , but i do i do n't wanna count on it . , you ca n't run your project based on the speculation that the data will come , professor c: could happen . so in terms of the , the what the smartkom gives us for m - three - l packages , it could be that they 're fine , or it could be eeh . you do n't , you do n't really like it . so we 're not abs we 're not required to use their packages . we are required at the end to give them in their format , but hey . it 's , it does n't control what you do in , internally . professor c: ? bu w i 'd like that this y , this week , to ha to n to have y guys , , pick the y , belief - net package and tell us what it is , and give us a pointer so we can play with it . and , then as soon as we have it , we should start trying to populate it for this problem . 
make a first cut at , what 's going on , and probably the ea easiest way to do that is some on - line way . , you can f figure out whether you wanna make it a web site or , how grad b: i , ok , i t i was actually more joking . with the two or three days . so this was a usual jo grad b: , it will take as long as y yo you guys need for that . but , maybe it might be interesting if the two of you can agree on who 's gon na be the speaker next monday , to tell us something about the net you picked , and what it does , and how it does that . grad b: so , y so that will be the assignment for next week , is to for slides and whatever net you picked and what it can do and how far you ' ve gotten . pppt ! professor c: , i 'd like to also , though , ha have a first cut at what the belief - net looks like . even if it 's really crude . so , here a here are professor c: and , as i said , what i 'd like to do is , what would be really great is you bring it in if if we could , in the meeting , say , " here 's the package , here 's the current one we have , " what other ideas do you have ? " and then we can think about this idea of making up the data file . of , , get a t a p tentative format for it , let 's say xml , that says , l , " these are the various scenarios we ' ve experienced . " we can just add to that and there 'll be this file of them and when you think you ' ve got a better belief - net , you just run it against this , this data file . grad e: and what 's the relation to this with changing the table so that the system works in english ? grad b: so this is whi - while you were doing this , i received two lovely emails . the the full nt and the full linux version are there . i ' ve downloaded them both , and i started to unpack the linux one , the nt one worked fine . and i started unta pack the linux one , it told me that i ca n't really unpack it because it contains a future date . so this is the time difference between germany . i had to until one o ' clock this afternoon before i was able to unpack it . now , then it will be my job to get this whole thing running both on swede and on this machine . and so that we have it . and then hopefully that hoping that my urgent message will now come through to ralph and tilman that it will send some more documentation along , we i control p maybe that 's what i will do next monday is show the state and show the system and show that . professor c: . so the answer , johno , is that these are , at the moment , separate . , what one hopes is that when we understand how the analyzer works , we can both worry about converting it to english and worry about how it could ex extract the parameters we need for the belief - net . grad e: i my question was more about time frame . so we 're gon na do belief - nets this week , and then professor c: , . i . n none of this is i n neither of these projects has got a real tight time - line , in the sense that over the next month there 's a deliverable . s so , it 's opportu in that sense it 's opportunistic . if if , if we do n't get any information for these guys f for several weeks then we are n't gon na sit around , wasting time , trying to do the problem or what they , just pppt ! go on and do other things . 
grad b: , but the this point is really very , very valid that ultimately we hope that both will merge into a harmonious and , wonderful , state where we can not only do the bare necessities , ie , changing the table so it does exactly in english what it does in german , but also that we can have the system where we can say , " ok , this is what it usually does , and now we add this little thing to it " , whatever , johno 's and bhaskara 's great belief - net , and we plug it in , and then for these certain tasks , and we know that navigational tasks are gon na be a core domain of the new system , it all of a sudden it does much better . because it can produce better answers , tell the person , as i s showed you on this map , n , produce either , a red line that goes to the vista point or a red line that goes to the tango point or red line that goes to the door , which would be great . so not only can you show that something sensible but ultimately , if you produce a system like this , it takes the person where it wants to go . rather than taking him always to the geometric center of a building , grad b: which is what they do now . and we even had to take out a bit . nancy , you missed that part . we had to take out a bit of the road work . so that it does n't take you to the wall every time . grad b: so this was actually an actual problem that we encountered , which nobody have has because car navigation systems do n't really care . , they get you to the beginning of the street , some now do the house number . but even that is problematic . if you go d if you wanna drive to the sap in waldorf , i ' m the same is true of microsoft , it takes you to the address , whatever , street number blah - blah , you are miles away from the entrance . because the s postal address is maybe a mailbox somewhere . but the entrance where you actually wanna go is somewhere completely different . so unless you 're a mail person you really do n't wanna go there . professor c: probably not then , cuz y you probably ca n't drop the mail there anyway . grad d: so , you two , who 'll be working on this , li are you gl will you be doing , are you supposed to just do it by thinking about the situation ? can you use the sample data ? grad d: is it like , ho is there more than is there a lot s of sample data that is beyond what you have there ? grad b: there there 's more than i showed , but , this is , in part my job to look at that and to see whether there are features in there that can be extracted , and to come up with some features that are not , empirically based on a real experiment or on reality but on your intuition of , " aha ! this is maybe a sign for that , and this is maybe a sign for this . " grad f: so , . later this week we should get together , and start thinking about that , hopefully . professor c: ok . we can end the meeting and call adam , and then we wanna s look at some filthy pictures of heidelberg . we can do that as . grad b: and that 's why , when it was hit by , a cannon ball , it exploded . grad e: i first thought it had something to do with the material that it w that 's why i asked . ###summary: the initial task of the edu group is to work on inferring intentions through context. in the navigational paradigm used for the task , these intentions are to "see" to "enter" or to "get to the closest point of" a building. there will be purpose-designed experiments carried out. 
however , the starting point is , through the use of existing data , to determine possible linguistic , discourse or situation features that define intentionality. these may include the type of building , time of day , particular phrases used or whether the user is a tourist or a native. initially , these features will be hand-coded , but the goal is to find ways of extracting them automatically from the xml data. consequently , they will be fed into a belief-net -implemented on a software package like javabayes- and the conditional probability of each intention calculated. a prototype system will be put together to test hypotheses regarding both the exact nature of the features and how intentions are derived from them. inferring intentions in a navigational context is an appropriate task both in project and real-world terms. its goals are clearly defined and contribute to a smarter system. although there are some preliminary data to work on , task-specific experiments and recordings will take place. the xml format that is going to be used has not been defined , although the smartkom data can be used as a foundation. in the first instance , the group will have to decide on the software package to be used for the creation and manipulation of the belief-nets. stability and ease-of-use -in addition to ability to handle xml- of the package are the focus at this stage , instead of the ability to handle large amounts of data. experts in bayes nets within icsi can be consulted on the matter. the decision will be presented in the next meeting along with a first schema of the belief-nets themselves. possible intermediate nodes can be added to the nets after this. the hypothesis that a set of features from which intentions can be derived exists has to be assessed , before moving on to how to extract these features from the data automatically. current navigation systems do not provide for the user's particular intentions when asking for directions. they always compute the shortest path between source and destination. the smartkom parser , for example , does not mark up data with features adequate for the inference of these intentions. although it is understandable that language , discourse and situation features will play a role in how they are weighed , the exact nature of those features is unclear. also hard to evaluate at this stage , is how the assumed features will combine in a belief-net , in order to provide the conditional probabilities of the users' intentions. even if these problems are solved , extracting the features from the data may prove to be a bottleneck. the existing data are appropriate only for preliminary work , as they don't include intention-related information. on the other hand , the details of the experiments that will have to be designed to get more appropriate data are not clearcut and are yet to be settled. when asking for directions , a user of a navigational device may wish to either view , enter or simply approach a building. this was identified as an initial problem to be tackled through "deep understanding"-type inferences. there is a set of data to start work on from previous work. similarly , the smartkom data-format and an ontology developed for the tourist domain ( both in xml-standard formats ) can be used as groundwork for defining the features , which indicate the user's intentions.
for the creation and management of the belief-nets necessary for the task , there are readily available packages -such as javabayes- and tools that can provide the infrastructure for a prototype system.
phd b: i . do you have news from the conference talk ? , that was programmed for yesterday i . professor c: i know now i you 're talking about . no , nobody 's told me anything . professor c: no , that would have been a good thing to find out before this meeting , that 's . no , i have no idea . so , let 's assume for right now that we 're just plugging on ahead , because even if they tell us that , the rules are different , we 're still interested in doing what we 're doing . so what are you doing ? phd b: - . , we ' ve a little bit worked on trying to see , what were the bugs and the problem with the latencies . phd b: so , we took first we took the lda filters and , we designed new filters , using recursive filters actually . professor c: so when you say " we " , is that something sunil is doing or is that ? phd b: so we took the filters the fir filters and we designed , iir filters that have the same frequency response . phd b: , similar , but that have shorter delays . so they had two filters , one for the low frequency bands and another for the high frequency bands . and so we redesigned two filters . and the low frequency band has sixty - four milliseconds of delay , and the high frequency band filter has something like eleven milliseconds compared to the two hundred milliseconds of the fir filters . but it 's not yet tested . so we have the filters but we still have to implement a routine that does recursive filtering professor c: no , because the whole problem that happened before was coordination , right ? so so you need to discuss with him what we 're doing , cuz they could be doing the same thing and . phd b: i if th that 's what they were trying to they were trying to do something different like taking , using filter that takes only a past phd b: this is just a little bit different . but i will send him an email and tell him exactly what we are doing , so . professor c: we just we just have to be in contact more . that the fact that we did that with had that thing with the latencies was indicative of the fact that there was n't enough communication . so . phd b: , there is w one , remark about these filters , that they do n't have a linear phase . , i , perhaps it does n't hurt because the phase is almost linear but . and so , for the delay i gave you here , it 's , computed on the five hertz modulation frequency , which is the mmm , the most important for speech this is the first thing . professor c: so that would be , a reduction of a hundred and thirty - six milliseconds , phd b: but there are other points actually , which will perhaps add some more delay . is that some other in the process were perhaps not very perf , not very correct , like the downsampling which w was simply dropping frames . so we will try also to add a downsampling having a filter that , a low - pass filter at twenty - five hertz . , because wh when we look at the lda filters , they are low - pass but they leave a lot of what 's above twenty - five hertz . and so , this will be another filter which would add ten milliseconds again . and then there 's a third thing , is that , the way on - line normalization was done , is just using this recursion on the , on the feature stream , and but this is a filter , so it has also a delay . and when we look at this filter actually it has a delay of eighty - five milliseconds .
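The delay figures being traded here can be checked mechanically. Below is a sketch that computes a recursive filter's group delay at the 5 Hz modulation frequency, assuming the feature stream runs at 100 frames per second (10 ms hop), and then tallies the stage-by-stage latency budget quoted in the next few turns. The Butterworth design is a stand-in rather than the actual redesigned LDA filters, so its printed number will not match the figures above.

```python
# Sketch: group delay of a stand-in recursive low-pass at the 5 Hz
# modulation frequency, plus a tally of the latency budget discussed below.
from scipy import signal

FRAME_RATE = 100.0  # feature frames per second (10 ms hop, assumed)

# Stand-in recursive low-pass, e.g. for the 25 Hz band-limiting filter
# mentioned for the downsampling stage.
b, a = signal.butter(2, 25.0, btype="low", fs=FRAME_RATE)
w, gd = signal.group_delay((b, a), w=[5.0], fs=FRAME_RATE)
print(f"group delay at 5 Hz: {gd[0] * 1000.0 / FRAME_RATE:.1f} ms")

# Stage delays as quoted in the discussion (my mapping of the numbers).
budget = {"LDA filtering (worst band)": 65,
          "downsampling filter": 10,
          "on-line normalization": 85,
          "neural net + PCA": 80}
print(sum(budget.values()), "ms total against the 250 ms limit")  # 240 ms
```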
so if we phd b: if we want to be very correct , so if we want to the estimation of the mean t to be , the right estimation of the mean , we have to t to take eighty - five milliseconds in the future . mmm . phd b: but , when we add up everything it 's it will be alright . we would be at six so , sixty - five , plus ten , plus for the downsampling , plus eighty - five for the on - line normalization . so it 's plus eighty for the neural net and pca . professor c: two - fifty , unless they changed the rules . which there is there 's some discussion of . professor c: but , the people who had very low latency want it to be low , very narrow , latency bound . and the people who have longer latency do n't . professor c: so they were more or less trading computation for performance and we were , trading latency for performance . and they were dealing with noise explicitly and we were n't , and so of it as complementary , that if we can put the professor c: complementary . the best systems so , everything that we did in a way it was just adamantly insisting on going in with a brain damaged system , which is something actually , we ' ve done a lot over the last thirteen years . , which is we say , this is the way we should do it . and then we do it . and then someone else does something that 's straight forward . so , w th w this was a test that largely had additive noise and we did we adde did nothing explicitly to handle ad additive noise . professor c: we just , , trained up systems to be more discriminant . and , we did this , rasta - like filtering which was done in the log domain and was tending to handle convolutional noise . we did we actually did nothing about additive noise . so , the , spectral sub subtraction schemes a couple places did seem to do a job . and so , we 're talking about putting some of that in while still keeping some of our . you should be able to end up with a system that 's better than both but clearly the way that we 're operating for this other does involved some latency to get rid of most of that latency . to get down to forty or fifty milliseconds we 'd have to throw out most of what we 're doing . and and , i do n't think there 's any good reason for it in the application actually . , you 're speaking to a recognizer on a remote server and , having a quarter second for some processing to clean it up . it does n't seem like it 's that big a deal . professor c: these are n't large vocabulary things so the decoder should n't take a really long time , and . phd a: and i do n't think anybody 's gon na notice the difference between a quarter of a second of latency and thirty milliseconds of latency . professor c: no . what what does wa was your experience when you were doing this with , the surgical , microscopes and . , how long was it from when somebody , finished an utterance to when , something started happening ? phd a: , we had a silence detector , so we would look for the end of an utterance based on the silence detector . and i ca n't remember now off the top of my head how many frames of silence we had to detect before we would declare it to be the end of an utterance . , but it was , i would say it was probably around the order of two hundred and fifty milliseconds . professor c: so you had a quarter second delay before , plus some little processing time , and then the microscope would start moving . and there 's physical inertia there , so probably the motion itself was all phd a: and it felt to , the users that it was instantaneous . 
, as fast as talking to a person . it th i do n't think anybody ever complained about the delay . professor c: so you would think as long as it 's under half a second . , i ' m not an expert on that but . phd a: i do n't remember the exact numbers but it was something like that . i do n't think you can really tell . a person i do n't think a person can tell the difference between , , a quarter of a second and a hundred milliseconds , and i ' m not even if we can tell the difference between a quarter of a second and half a second . it just it feels so quick . professor c: , if you said , , " what 's the , what 's the shortest route to the opera ? " and it took half a second to get back to you , it would be f , it might even be too abrupt . you might have to put in a s a delay . phd a: , it may feel different than talking to a person because when we talk to each other we tend to step on each other 's utterances . so like if i ' m asking you a question , you may start answering before i ' m even done . so it would probably feel different but i do n't think it would feel slow . professor c: , anyway , we could cut we else , we could cut down on the neural net time by , playing around a little bit , going more into the past , like that . we t we talked about that . phd a: so is the latency from the neural net caused by how far ahead you 're looking ? professor c: and there 's also , there 's the neural net and there 's also this , multi - frame , klt . phd a: was n't there was it in the , recurrent neural nets where they were n't looking ahead ? professor c: they were n't looking ahead much . they p they looked ahead a little bit . professor c: , you could do this with a recurrent net . and and then but you also could just , , we have n't experimented with this but i imagine you could , , predict a , a label , from more in the past than in the future . , we ' ve d we ' ve done some with that before . it works ok . professor c: but we ' ve but we played a little bit with asymmetric , guys . you can do it . so , that 's what you 're busy with , s messing around with this , and , phd d: also we were thinking to , apply the , spectral subtraction from ericsson and to change the contextual klt for lda . phd d: klt , i ' m . , to change and use lda discriminative . i . phd b: , it 's that by the for the moment we have , something that 's discriminant and nonlinear . and the other is linear but it 's not discriminant . , it 's a linear transformation , that professor c: so at least just to understand maybe what the difference was between how much you were getting from just putting the frames together and how much you 're getting from the discriminative , what the nonlinearity does for you or does n't do for you . just to understand it a little better i . phd b: actually what we want to do , perhaps it 's to replace to have something that 's discriminant but linear , also . and to see if it improves ov over the non - discriminant linear transformation . and if the neural net is better than this or , . professor c: , that 's what i meant , is to see whether it having the neural net really buys you anything . professor c: , it doe did look like it buys you something over just the klt . but maybe it 's just the discrimination and maybe , maybe the nonlinear discrimination is n't necessary . professor c: good good to know . but the other part you were saying was the spectral subtraction , so you just , at what stage do you do that ? do you 're doing that , ? 
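As an aside before the subtraction question is answered below: the transform comparison raised a few turns back, something discriminant but still linear, is easy to set up side by side. A sketch assuming stacked multi-frame features and scikit-learn; the shapes, label counts, and data are invented stand-ins.

```python
# Sketch: KLT/PCA (linear, not discriminant) versus LDA (linear and
# discriminant) on stacked multi-frame feature vectors. All data here
# are random stand-ins for real features and phone labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9 * 13))   # e.g. 9 stacked frames of 13 features
y = rng.integers(0, 40, size=1000)    # phone labels (stand-ins)

klt = PCA(n_components=24).fit(X)                            # no labels used
lda = LinearDiscriminantAnalysis(n_components=24).fit(X, y)  # label-driven
print(klt.transform(X).shape, lda.transform(X).shape)        # both (1000, 24)
```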
phd d: we no nnn we we was thinking to do before after vad or , we exactly when it 's better . before after vad professor c: , one thing that would be good to find out about from this conference call is that what they were talking about , what they 're proposing doing , was having a third party , run a good vad , and determine boundaries . and then given those boundaries , then have everybody do the recognition . professor c: the reason for that was that , if some one p one group put in the vad and another did n't , or one had a better vad than the other since that they 're not viewing that as being part of the task , and that any manufacturer would put a bunch of effort into having some s good speech - silence detection . it still would n't be perfect but , e the argument was " let 's not have that be part of this test . " let 's let 's separate that out . , i they argued about that yesterday and , i ' m , i do n't the answer but we should find out . i ' m we 'll find out soon what they , what they decided . so , so there 's the question of the vad but otherwise it 's on the , the mel fil filter bank , energies i ? professor c: you do doing the ? and you 're subtracting in the i it 's power domain , or magnitude domain . probably power domain , right ? professor c: , if you look at the theory , it 's it should be in the power domain but , i ' ve seen implementations where people do it in the magnitude domain and i have asked people why and they shrug their shoulders and say , " , it works . " and there 's this i there 's this mysterious people who do this a lot i have developed little tricks of the trade . , there 's this , you do n't just subtract the estimate of the noise spectrum . you subtract th that times phd b: and generated this , so you have the estimation of the power spectra of the noise , and you multiply this by a factor which is depend dependent on the snr . so . phd b: when the speech lev when the signal level is more important , compared to this noise level , the coefficient is small , and around one . but when the power le the s signal level is small compared to the noise level , the coefficient is more important . and this reduce actually the music musical noise , which is more important during silence portions , when the s the energy 's small . so there are tricks like this but , mmm . professor c: , that 's what differs from different tasks and different s , spectral subtraction methods . , if you have , fair assurance that , the noise is quite stationary , then the smartest thing to do is use as much data as possible to estimate the noise , get a much better estimate , and subtract it off . but if it 's varying , which is gon na be the case for almost any real situation , you have to do it on - line , with some forgetting factor . phd a: so do you is there some long window that extends into the past over which you calculate the average ? professor c: , there 's a lot of different ways of computing the noise spectrum . so one of the things that , hans - guenter hirsch did , and pas and other people actually , he 's he was n't the only one i , was to , take some period of speech and in each band , develop a histogram . so , to get a decent histogram of these energies takes at least a few seconds really . but , you can do it with a smaller amount but it 's pretty rough . and , the nist standard method of determining signal - to - noise ratio is based on this . professor c: so no , no , it 's based on this method , this histogram method . so you have a histogram .
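Before the histogram method is developed further in the next turns, here is a sketch of the oversubtraction trick just described: subtract the noise estimate scaled by an SNR-dependent factor, in the power domain, and floor the result so that silence does not turn into musical noise. The factor schedule and floor value are illustrative guesses, not the Ericsson settings.

```python
# Sketch of power-domain spectral subtraction with an SNR-dependent
# oversubtraction factor: subtract more than the noise estimate where the
# local SNR is poor, close to 1x where it is good, then floor the result.
import numpy as np

def spectral_subtract(power, noise_power, floor=0.01):
    # power, noise_power: per-frame, per-bin power spectra (same shape).
    snr_db = 10.0 * np.log10(np.maximum(power, 1e-12) /
                             np.maximum(noise_power, 1e-12))
    # Up to ~4x oversubtraction at low SNR, ~1x at high SNR (assumed).
    alpha = np.clip(4.0 - 0.15 * snr_db, 1.0, 4.0)
    cleaned = power - alpha * noise_power
    # Spectral floor keeps silent regions from turning into musical noise.
    return np.maximum(cleaned, floor * noise_power)
```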
now , if you have signal and you have noise , you have these two bumps in the histogram , which you could approximate as two gaussians . professor c: so you have a mixture of two gaussians . right ? and you can use em to figure out what it is . so so now you have this mixture of two gaussians , you n they are , , you estimate what they are , and , so this gives you what the signal is and what the noise e energy is in that band in the spectrum . and then you look over the whole thing and now you have a noise spectrum . so , hans - guenter hirsch and others have used that method . and the other thing to do is which is more trivial and obvious is to , determine through magical means that , there 's no speech in some period , and then see what the spectrum is . , it 's that 's tricky to do . it has mistakes . , and if you ' ve got enough time , this other method appears to be somewhat more reliable . , a variant on that for just determining signal - to - noise ratio is to just , you can do a w a an iterative thing , em - like thing , to determine means only . i it is em still , but just determine the means only . do n't worry about the variances . and then you just use those mean values as being the , signal - to - noise ratio in that band . phd a: but what is the it seems like this thing could add to the latency . , depending on where the window was that you used to calculate the signal - to - noise ratio . professor c: if you just , if you just if you , a at the beginning you have some esti some phd b: actually , it 's a mmm if - if you want to have a good estimation on non - stationary noise you have to look in the future . , if you take your window and build your histogram in this window , what you can expect is to have an estimation of th of the noise in the middle of the window , not at the end . so the but people phd b: the they just look in the past . i it works because the noise are , pret , almost stationary professor c: , the thing , e , y , you 're talking about non - stationary noise but that spectral subtraction is rarely is not gon na work really for non - stationary noise , phd b: , if y if you have a good estimation of the noise , because it has to work . professor c: so so that what is wh what 's more common is that you 're going to be helped with r slowly varying or stationary noise . that 's what spectral subtraction will help with , practically speaking . if it varies a lot , to get a if if to get a good estimate you need a few seconds of speech , even if it 's centered , if you need a few seconds to get a decent estimate but it 's changed a lot in a few seconds , then it , i it 's a problem . , imagine e five hertz is the middle of the speech modulation spectrum , so imagine a jack hammer going at five hertz . phd b: so in this case , , you can not but y , hirsch does experiment with windows of like between five hundred milliseconds and one second . and , five hundred wa was not so bad . and he worked on non - stationary noises , like noise modulated with , wi with amplitude modulations things like that , phd b: , in the paper he showed that actually the estimation of the noise is delayed . 
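The means-only EM variant just mentioned is compact enough to sketch. Assume the log-energies of one band form two bumps of roughly unit variance and equal weight; fit the two means and read the band SNR off their difference. The initialization, fixed variances, and iteration count are simplifications of the full EM.

```python
# Minimal sketch of the histogram method: a means-only EM fit of two
# equal-variance bumps to a band's log-energies; their separation is
# taken as the band SNR.
import numpy as np

def band_snr_em(log_energy, iters=20):
    lo, hi = np.percentile(log_energy, [10, 90])  # initial noise/speech means
    for _ in range(iters):
        # E-step: responsibility of the "speech" bump (unit variances,
        # equal priors assumed, so the Gaussian constants cancel).
        d_lo, d_hi = log_energy - lo, log_energy - hi
        r = np.exp(-0.5 * d_hi**2) / (np.exp(-0.5 * d_lo**2) +
                                      np.exp(-0.5 * d_hi**2) + 1e-12)
        # M-step: update the two means only.
        lo = np.sum((1 - r) * log_energy) / (np.sum(1 - r) + 1e-12)
        hi = np.sum(r * log_energy) / (np.sum(r) + 1e-12)
    return hi - lo  # band SNR in log-energy units

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
print(f"estimated band SNR: {band_snr_em(x):.1f}")
```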
, it 's there is you have to center the window , professor c: no , i understand it 's better to do but think that , for real noises wh what 's most likely to happen is that there 'll be some things that are relatively stationary where you can use one or another spectral subtraction thing and other things where it 's not so stationary , you can always pick something that falls between your methods , but i if , if sinusoidally , modul amplitude modulated noise is a big problem in practice . that it 's phd a: we could probably get a really good estimate of the noise if we just went to the noise files , and built the averages from them . phd b: but if the noise is stationary perhaps you do n't even need some noise estimation algorithm . we just take th the beginning of the utterance i know p i if people tried this for aurora . professor c: right , the word " stationary " is has a very precise statistical meaning . but , in signal - processing really what we 're talking about is things that change slowly , compared with our processing techniques . so if you 're driving along in a car i would think that most of the time the nature of the noise is going to change relatively slowly . it 's not gon na stay absolute the same . if you if you check it out , five minutes later you may be in a different part of the road or whatever . but it 's i using the local characteristics in time , is probably going to work pretty . but you could get hurt a lot if you just took some something from the beginning of all the speech , of , an hour of speech and then later , so they may be , may be overly , complicated for this test but , i . but what you 're saying , makes sense , though . , if possible you should n't you should make it , the center of the window . , we 're already having problems with these delay , delay issues . phd a: if they 're going to provide a , voice activity detector that will tell you the boundaries of the speech , then , could n't you just go outside those boundaries and do your estimate there ? professor c: you bet . so i imagine that 's what they 're doing , is they 're probably looking in nonspeech sections and getting some , phd b: they have some threshold on the previous estimate , . , ericsson used this threshold . , so , they h they have an estimate of the noise level and they put a threshold like six or ten db above , what 's under this threshold is used to update the estimate . is is that right or ? professor c: does france telecom do this does france telecom do th do the same thing ? more or less ? professor c: if we 're done with that , let 's see . , maybe we can talk about a couple other things briefly , just , things that we ' ve been chatting about but have n't made it into these meetings yet . so you 're coming up with your quals proposal , and , wanna just give a two three minute summary of what you 're planning on doing ? grad e: , two , three , it can be shorter than that . , i ' ve talked to some of you already . , but i ' m , looking into extending the work done by larry saul and john allen and mazin rahim . 
, they have a system that 's , a multi - band , system but their multi - band is a little different than the way that we ' ve been doing multi - band in the past , where where we ' ve been @ taking sub - band features and i training up these neural nets and on phonetic targets , and then combining them some somehow down the line , they 're taking sub - band features and , training up a detector that detects for , these phonetic features , he presents , a detector to detect sonorance . and so what it is , it 's there 's at the lowest level , there it 's an or ga , it 's an and gate . so , on each sub - band you have several independent tests , to test whether , there 's the existence of sonorance in a sub - band . and then , it c it 's combined by a soft and gate . and at the higher level , for every if , the higher level there 's a soft or gate . , so if this detector detects , the presence of sonorance in any of the sub - bands , then the detect , the or gate at the top says , " ok , this frame has evidence of sonorance . " grad e: and these are all , ok . , the low level detectors are logistic regressions . and the , professor c: so that 's all it is . it 's a sig it 's a sigmoid , with weighted sum at the input , which you train by gradient descent . grad e: right . so he uses , an em algorithm to train up these parameters for the logistic regression . professor c: so i was using em to get the targets . so so you have this and gate what we were calling an and gate , but it 's a product rule thing at the output . and then he uses , i u and then feeding into that are i ' m , there 's it 's an or at the output , is n't it ? professor c: so that 's the product . and then , then he has each of these and things . and , so they 're little neural units . they have to have targets . and so the targets come from em . phd a: and so are each of these , low level detectors are they , are these something that you decide ahead of time , like " i ' m going to look for this particular feature or i ' m going to look at this frequency , " or what what are they looking at ? what are their inputs ? grad e: right , so the ok , so at each for each sub - band there are , several measures of snr and correlation . and he said there 's like twenty of these per sub - band . and for every s every sub - band , e you just pick ahead of time , " i ' m going to have like five i independent logistic tests . " and you initialize these parameters , in some way and use em to come up with your training targets for a for the low - level detectors . and then , once you get that done , you train the whole thing on maximum likelihood . and h he shows that using this method to detect sonorance is it 's very robust compared to , to typical , full - band gaussian mixtures estimations of sonorance . and , so that 's just one detector . so you can imagine building many of these detectors on different features . you get enough of these detectors together , then you have enough information to do , higher level discrimination , discriminating between phones and then you keep working your way up until you build a full recognizer . so , that 's the direction which i ' m thinking about going in my quals . professor c: , it has a number of properties that i really liked . , one is the going towards , using narrow band information for , ph phonetic features of some sort rather than just , immediately going for the typical sound units . 
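One reading of the detector structure described above, as a sketch: several logistic tests per sub-band, a soft AND (product) over the tests within a band, and a soft OR across bands that fires if any band shows evidence of sonorance. The weights below are random stand-ins; in the actual work they come from EM-derived targets and maximum-likelihood training, which is not reproduced here.

```python
# Sketch of the per-band logistic tests with soft AND within a band and
# soft OR across bands; weights are random placeholders, not trained.
import numpy as np

rng = np.random.default_rng(0)
N_BANDS, N_TESTS, N_FEATS = 4, 5, 20  # e.g. ~20 SNR/correlation measures per band
W = rng.normal(size=(N_BANDS, N_TESTS, N_FEATS))
b = rng.normal(size=(N_BANDS, N_TESTS))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_sonorant(band_feats):
    # band_feats: (N_BANDS, N_FEATS) features for one frame.
    tests = sigmoid(np.einsum("btf,bf->bt", W, band_feats) + b)
    p_band = tests.prod(axis=1)         # soft AND within each band
    return 1.0 - np.prod(1.0 - p_band)  # soft OR across bands

frame = rng.normal(size=(N_BANDS, N_FEATS))
print(f"P(sonorant) = {p_sonorant(frame):.3f}")
```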
another thing i like about it is that you t this thing is going to be trained explicitly trained for a product of errors rule , which is what , allen keeps pointing out that fletcher observed in the twenties , for people listening to narrow band . that 's friday 's talk , . and then , , the third thing i like about it is , and we ' ve played around with this in a different way a little bit but it has n't been our dominant way of operating anything , this issue of where the targets come from . so in our case when we ' ve been training it multi - band things , the way we get the targets for the individual bands is , that we get the phonetic label for the sound there and we say , " ok , we train every " what this is saying is , ok , that 's maybe what our ultimate goal is or not ultimate but penultimate goal is getting these small sound units . but but , along the way how much should we , what should we be training these intermediate things for ? , because , we , that this is a particularly good feature . , there 's no way , someone in the audience yesterday was asking , " could n't you have people go through and mark the individual bands and say where the where it was sonorant or not ? " but , having a bunch of people listening to critical band wide , chunks of speech trying to determine whether it 'd be impossible . professor c: it 's all gon na sound like sine waves to you , more or less . not , it 's g all g narrow band i i m it 's very hard for someone to a person to make that determination . , we do n't really know how those should be labeled . it could sh be that you should , not be paying that much attention to , certain bands for certain sounds , in order to get the best result . so , what we have been doing there , just mixing it all together , is certainly much cruder than that . we trained these things up on the , the final label . now we have i done experiments you ' ve probably done where you have , done separate , viterbis on the different grad e: , it helps for one or t one iteration but , anything after that it does n't help . professor c: so so that may or may t it that aspect of what he 's doing may or may not be helpful because in a sense that 's the same thing . you 're taking global information and determining what you how you should but this is , i th a little more direct . professor c: and , he 's look he 's just actually looking at , the confusions between sonorant and non - sonorant . so he has n't applied it to recognition or if he did he did n't talk about it . it 's it 's just and one of the concerns in the audience , actually , was that , the , he did a comparison to , , our old foil , the nasty old standard recognizer with mel filter bank at the front , and h m ms , and . and , it did n't do nearly as , especially in noise . but the one of the good questions in the audience was , , but that was n't trained for that . , this use of a very smooth , spectral envelope is something that , has evolved as being generally a good thing for speech recognition but if you knew that what you were gon na do is detect sonorants or not so sonorants and non - sonorants is almost like voiced - unvoiced , except i that the voiced stops are also called " obstruents " . so it 's , but with the exception of the stops i it 's the same as voiced - unvoiced , so so if you knew you were doing that , if you were doing something say for a , a vocoder , you would n't use the same features . you would use something that was sensitive to the periodicity and not just the envelope . 
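For reference, the product-of-errors rule mentioned at the top of this turn: if the bands are independent, the full-band error is the product of the per-band errors, so several mediocre bands still combine into a strong full-band result. A toy tally with invented error rates:

```python
# Fletcher's product-of-errors rule with made-up per-band error rates.
band_errors = [0.4, 0.5, 0.3]
full_band_error = 1.0
for e in band_errors:
    full_band_error *= e
print(full_band_error)  # 0.06: three mediocre bands yield a good full band
```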
, and so in that sense it was an unfair test . so that the questioner was right . it it was in that sense an unfair test . nonetheless , it was one that was interesting because , this is what we are actually using for speech recognition , these smooth envelopes . and this says that perhaps even , trying to use them in the best way that we can , that we ordinarily do , with , gaussian mixtures and h m ms and , you do n't , actually do that on determining whether something is sonorant or not . professor c: which means you 're gon na make errors between similar sounds that are son sonorant or obstruent . phd a: did n't they also do some an oracle experiment where they said " if we could detect the sonorants perfectly and then show how it would improve speech recognition ? i remember hearing about an experiment like that . professor c: the - these same people ? i do n't remember that . that would that 's you 're right , that 's exactly the question to follow up this discussion , is suppose you did that , got that right . phd b: what could be the other low level detectors , for other features , in addition to detecting sonorants th - that 's what you want to go for also grad e: let 's see , i d i . e , w easiest thing would be to go do some voicing but that 's very similar to sonorance . phd a: when we when we talked with john ohala the other day we made a list of some of the things that w like frication , professor c: now this was coming at it from a different angle but maybe it 's a good way to start . , these are things which , john felt that a , a human annotator would be able to reliably mark . so the things he felt would be difficult for a human annotator to reliably mark would be tongue position kinds of things . professor c: but stress does n't , fit in this thing of coming up with features that will distinguish words from one another , it 's a it 's a good thing to mark and will probably help us ultimate with recognition phd a: there 's a few cases where it can like permit and permit . but that 's not very common in english . in other languages it 's more , important . professor c: but i either case you 'd write permit , so you 'd get the word right . phd a: no , i ' m saying , i e you were saying that stress does n't help you distinguish between words . , i see what you 're saying . as long as you get the sequence , professor c: we 're g if we 're doing if we 're talking about transcription as opposed to something else phd a: right ? , . so where it could help is maybe at a higher level . professor c: but that 's this afternoon 's meeting . we do n't understand anything in this meeting . so that 's , a neat thing grad e: s so , ohala 's going to help do these , transcriptions of the meeting data ? phd a: i . we d we did n't get that far . , we just talked about some possible features that could be marked by humans and , because of having maybe some extra transcriber time we thought we could go through and mark some portion of the data for that . and , professor c: , that 's not an immediate problem , that we do n't immediately have a lot of extra transcriber time . professor c: but but , in the long term i chuck is gon na continue the dialogue with john and , we 'll end up doing some . phd a: i ' m definitely interested in this area , too , f , acoustic feature . professor c: , it 's an interesting way to go . i say it like " said - int " . it has a number of good things . 
so , y you want to talk maybe a c two or three minutes about what we ' ve been talking about today and other days ? grad f: ri , ok , so , we 're interested in , methods for far mike speech recognition , mainly , methods that deal with the reverberation in the far mike signal . so , one approach would be , say msg and plp , like was used in aurora one and , there are other approaches which actually attempt to remove the reverberation , instead of being robust to it like msg . and so we 're interested in , comparing the performance of , a robust approach like msg with these , speech enhancement or de - reverber de - reverberation approaches . and , it looks like we 're gon na use the meeting recorder digits data for that . phd b: and the de - reverberation algorithm , do you have can you give some more details on this or ? does it use one microphone ? grad f: , there was something that was done by , a guy named carlos , i forget his last name , who worked with hynek , who , grad f: ok . who , , it was like rasta in the sense that of it was , de - convolution by filtering , except he used a longer time window , like a second maybe . and the reason for that is rasta 's time window is too short to , include the whole , reverberation , i what you call it the reverberation response . i if you see wh if you see what . the reverberation filter from my mouth to that mike is like it 's t got it 's too long in the time domain for the rasta filtering to take care of it . and , then there are a couple of other speech enhancement approaches which have n't been tried for speech recognition yet but have just been tried for enhancement , which , have the assumption that , you can do lpc analysis of th of the signal you get at the far microphone and the , all pole filter that you get out of that should be good . it 's just the , excitation signal that is going to be distorted by the reverberation and so you can try and reconstruct a better excitation signal and , feed that through the i , all pole filter and get enhanced speech with reverberation reduced . professor c: there 's also this , , echo cancellation that we ' ve been chasing , so , we have , and when we 're saying these digits now we do have a close microphone signal and then there 's the distant microphone signal . and you could as a baseline say , " ok , given that we have both of these , we should be able to do , a cancellation . " so that , , we , essentially identify the system in between the linear time invariant system between the microphones and re and invert it , or cancel it out to some reasonable approximation through one method or another . , that 's not a practical thing , if you have a distant mike , you do n't have a close mike ordinarily , but we thought that might make also might make a good baseline . , it still wo n't be perfect because there 's noise . , but and then there are s , there are single microphone methods that people have done for , for this de - reverberation . do y do any references to any ? cuz i w i was w i lead him down a bad path on that . phd b: i g i when people are working with single microphones , they are more trying to do phd b: , not very , there is the avendano work , but also trying to mmm , trying to f t find the de - convolution filter but in the not in the time domain but in the stream of features i . 
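The close-mic/distant-mic cancellation baseline described a few turns up could be prototyped with a normalized LMS adaptive filter that identifies the system between the two microphones. A sketch under loose assumptions: a toy synthetic room response, a far shorter filter than a real reverberation tail would need, and an illustrative step size.

```python
# Sketch: NLMS identification of the close-mic -> distant-mic response;
# the residual is what cancellation leaves in the distant-mic signal.
import numpy as np

def nlms(reference, distant, n_taps=64, mu=0.5, eps=1e-6):
    w = np.zeros(n_taps)                 # FIR estimate of the room response
    err = np.zeros(len(distant))
    for n in range(n_taps - 1, len(distant)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = distant[n] - w @ x                     # cancellation residual
        w += mu * e * x / (x @ x + eps)            # normalized LMS update
        err[n] = e
    return w, err

rng = np.random.default_rng(0)
s = rng.normal(size=8000)                               # "close mic" signal
h = rng.normal(size=64) * np.exp(-np.arange(64) / 8.0)  # toy room response
d = np.convolve(s, h)[:8000] + 0.01 * rng.normal(size=8000)
w_est, residual = nlms(s, d)
print(f"residual power after adaptation: {np.mean(residual[4000:]**2):.4f}")
```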
, @ there 's someone working on this on i in mons so perhaps , we should try t to he 's working on this , on trying to on re reverberation , professor c: the first paper on this is gon na have great references , tell already . it 's always good to have references , especially when reviewers read it or one of the authors and , feel they 'll " you 're ok , you ' ve r you cited me . " phd b: , he did echo cancellation and he did some fancier things like , , training different network on different reverberation conditions and then trying to find the best one , but . professor c: the oth the other thing , that dave was talking about earlier was , multiple mike things , where they 're all distant . so , , there 's all this work on arrays , but the other thing is , what can we do that 's cleverer that can take some advantage of only two mikes , particularly if there 's an obstruction between them , as we have over there . professor c: an obstruction between them . it creates a shadow which is helpful . it 's part of why you have such good directionality with , with two ears even though they 're not several feet apart . for most for most people 's heads . professor c: so that , the head , in the way , is really that 's what it 's for . it 's , professor c: it 's to separate the ears . that 's right , so . anyway , o k . , that 's all we have this week . and , it 's digit time . phd a: actually the , for some reason the digit forms are blank . , th that may be due to the fact that adam ran out of digits , and did n't have time to regenerate any . professor c: i it 's there 's no real reason to write our names on here then , phd a: cuz we put that into the " key " files . but w that 's why we have the forms , even if there are no digits . professor c: i did n't notice this . i ' m sitting here and i was about to read them too . it 's a , blank sheet of paper .
###summary: the berkeley meeting recorder group discussed the progress of several of their members. the progress being made on the group's main project , a speech recogniser for the cellular industry , was reported. the group also touched upon matters that had broader implications for the work , such as the work of other groups on the same project. there were also some progress reports from group members working on other projects. no one from the group attended a recent video conference about their main project , but they need to find out what was discussed in it. until they do , they will continue on , assuming nothing major has been changed. the group needs to discuss any new investigation with partners to make sure work is not repeated. there was a recent video conference meeting discussing the cellular project , but no one from the group attended and so they do not know if it has any implications for their work , or if any important decisions were made. this includes decisions on the desired latency for the system , since the group is currently at the limit. spectral subtraction , which the group is currently investigating as a method of dealing with noise , may add to the delay time , but also it is hard to do with non-stationary noise. speakers mn007 and fn002 have been working on the group's main project , looking for bugs in the system , and trying to improve latency. the group's work currently has the highest latency on the project , and they are looking for ways to cut the delays. these include replacing fir filters with iir , and investigating spectral subtraction methods which do not require taking the future into account. speaker me006 has put together a proposal to extend work on a multiband system using low-level detectors , and applying it to recognition. speaker me026 has been looking at methods for recognition using far mics , trying to deal with reverberation and echo-cancellation.
###dialogue: phd b: i . do you have news from the conference talk ? , that was programmed for yesterday i . professor c: i know now i you 're talking about . no , nobody 's told me anything . professor c: no , that would have been a good thing to find out before this meeting , that 's . no , i have no idea . so , let 's assume for right now that we 're just plugging on ahead , because even if they tell us that , the rules are different , we 're still interested in doing what we 're doing . so what are you doing ? phd b: - . , we ' ve a little bit worked on trying to see , what were the bugs and the problem with the latencies . phd b: so , we took first we took the lda filters and , we designed new filters , using recursive filters actually . professor c: so when you say " we " , is that something sunil is doing or is that ? phd b: so we took the filters the fir filters and we designed , iir filters that have the same frequency response . phd b: , similar , but that have shorter delays . so they had two filters , one for the low frequency bands and another for the high frequency bands . and so we redesigned two filters . and the low frequency band has sixty - four milliseconds of delay , and the high frequency band filter has something like eleven milliseconds compared to the two hundred milliseconds of the iir filters . but it 's not yet test . so we have the filters but we still have to implement a routine that does recursive filtering professor c: no , because the whole problem that happened before was coordination , right ? so so you need to discuss with him what we 're doing , cuz they could be doing the same thing and . phd b: i if th that 's what they were trying to they were trying to do something different like taking , using filter that takes only a past phd b: this is just a little bit different . but i will send him an email and tell him exactly what we are doing , so . professor c: we just we just have to be in contact more . that the fact that we did that with had that thing with the latencies was indicative of the fact that there was n't enough communication . so . phd b: , there is w one , remark about these filters , that they do n't have a linear phase . , i , perhaps it does n't hurt because the phase is almost linear but . and so , for the delay i gave you here , it 's , computed on the five hertz modulation frequency , which is the mmm , the most important for speech this is the first thing . professor c: so that would be , a reduction of a hundred and thirty - six milliseconds , phd b: but there are other points actually , which will perhaps add some more delay . is that some other in the process were perhaps not very perf , not very correct , like the downsampling which w was simply dropping frames . so we will try also to add a downsampling having a filter that , a low - pass filter at twenty - five hertz . , because wh when we look at the lda filters , they are low - pass but they leave a lot of what 's above twenty - five hertz . and so , this will be another filter which would add ten milliseconds again . and then there 's a third thing , is that , the way on - line normalization was done , is just using this recursion on the , on the feature stream , and but this is a filter , so it has also a delay . and when we look at this filter actually it has a delay of eighty - five milliseconds . 
so if we phd b: if we want to be very correct , so if we want to the estimation of the mean t to be , the right estimation of the mean , we have to t to take eighty - five milliseconds in the future . mmm . phd b: but , when we add up everything it 's it will be alright . we would be at six so , sixty - five , plus ten , plus for the downsampling , plus eighty - five for the on - line normalization . so it 's plus eighty for the neural net and pca . professor c: two - fifty , unless they changed the rules . which there is there 's some discussion of . professor c: but , the people who had very low latency want it to be low , very narrow , latency bound . and the people who have longer latency do n't . professor c: so they were more or less trading computation for performance and we were , trading latency for performance . and they were dealing with noise explicitly and we were n't , and so of it as complementary , that if we can put the professor c: complementary . the best systems so , everything that we did in a way it was just adamantly insisting on going in with a brain damaged system , which is something actually , we ' ve done a lot over the last thirteen years . , which is we say , this is the way we should do it . and then we do it . and then someone else does something that 's straight forward . so , w th w this was a test that largely had additive noise and we did we adde did nothing explicitly to handle ad additive noise . professor c: we just , , trained up systems to be more discriminant . and , we did this , rasta - like filtering which was done in the log domain and was tending to handle convolutional noise . we did we actually did nothing about additive noise . so , the , spectral sub subtraction schemes a couple places did seem to do a job . and so , we 're talking about putting some of that in while still keeping some of our . you should be able to end up with a system that 's better than both but clearly the way that we 're operating for this other does involved some latency to get rid of most of that latency . to get down to forty or fifty milliseconds we 'd have to throw out most of what we 're doing . and and , i do n't think there 's any good reason for it in the application actually . , you 're speaking to a recognizer on a remote server and , having a quarter second for some processing to clean it up . it does n't seem like it 's that big a deal . professor c: these are n't large vocabulary things so the decoder should n't take a really long time , and . phd a: and i do n't think anybody 's gon na notice the difference between a quarter of a second of latency and thirty milliseconds of latency . professor c: no . what what does wa was your experience when you were doing this with , the surgical , microscopes and . , how long was it from when somebody , finished an utterance to when , something started happening ? phd a: , we had a silence detector , so we would look for the end of an utterance based on the silence detector . and i ca n't remember now off the top of my head how many frames of silence we had to detect before we would declare it to be the end of an utterance . , but it was , i would say it was probably around the order of two hundred and fifty milliseconds . professor c: so you had a quarter second delay before , plus some little processing time , and then the microscope would start moving . and there 's physical inertia there , so probably the motion itself was all phd a: and it felt to , the users that it was instantaneous . 
, as fast as talking to a person . it th i do n't think anybody ever complained about the delay . professor c: so you would think as long as it 's under half a second . , i ' m not an expert on that but . phd a: i do n't remember the exact numbers but it was something like that . i do n't think you can really tell . a person i do n't think a person can tell the difference between , , a quarter of a second and a hundred milliseconds , and i ' m not even if we can tell the difference between a quarter of a second and half a second . it just it feels so quick . professor c: , if you said , , " what 's the , what 's the shortest route to the opera ? " and it took half a second to get back to you , it would be f , it might even be too abrupt . you might have to put in a s a delay . phd a: , it may feel different than talking to a person because when we talk to each other we tend to step on each other 's utterances . so like if i ' m asking you a question , you may start answering before i ' m even done . so it would probably feel different but i do n't think it would feel slow . professor c: , anyway , we could cut we else , we could cut down on the neural net time by , playing around a little bit , going more into the past , like that . we t we talked about that . phd a: so is the latency from the neural net caused by how far ahead you 're looking ? professor c: and there 's also , there 's the neural net and there 's also this , multi - frame , klt . phd a: was n't there was it in the , recurrent neural nets where they were n't looking ahead ? professor c: they were n't looking ahead much . they p they looked ahead a little bit . professor c: , you could do this with a recurrent net . and and then but you also could just , , we have n't experimented with this but i imagine you could , , predict a , a label , from more in the past than in the future . , we ' ve d we ' ve done some with that before . it works ok . professor c: but we ' ve but we played a little bit with asymmetric , guys . you can do it . so , that 's what you 're busy with , s messing around with this , and , phd d: also we were thinking to , apply the , spectral subtraction from ericsson and to change the contextual klt for lda . phd d: klt , i ' m . , to change and use lda discriminative . i . phd b: , it 's that by the for the moment we have , something that 's discriminant and nonlinear . and the other is linear but it 's not discriminant . , it 's a linear transformation , that professor c: so at least just to understand maybe what the difference was between how much you were getting from just putting the frames together and how much you 're getting from the discriminative , what the nonlinearity does for you or does n't do for you . just to understand it a little better i . phd b: actually what we want to do , perhaps it 's to replace to have something that 's discriminant but linear , also . and to see if it improves ov over the non - discriminant linear transformation . and if the neural net is better than this or , . professor c: , that 's what i meant , is to see whether it having the neural net really buys you anything . professor c: , it doe did look like it buys you something over just the klt . but maybe it 's just the discrimination and maybe , maybe the nonlinear discrimination is n't necessary . professor c: good good to know . but the other part you were saying was the spectral subtraction , so you just , at what stage do you do that ? do you 're doing that , ? 
phd d: we no nnn we we was thinking to do before after vad or , we exactly when it 's better . before after vad professor c: , one thing that would be no good to find out about from this conference call is that what they were talking about , what they 're proposing doing , was having a third party , run a good vad , and determine boundaries . and then given those boundaries , then have everybody do the recognition . professor c: the reason for that was that , if some one p one group put in the vad and another did n't , or one had a better vad than the other since that they 're not viewing that as being part of the task , and that any manufacturer would put a bunch of effort into having some s good speech - silence detection . it still would n't be perfect but , e the argument was " let 's not have that be part of this test . " let 's let 's separate that out . , i they argued about that yesterday and , i ' m , i do n't the answer but we should find out . i ' m we 'll find out soon what they , what they decided . so , so there 's the question of the vad but otherwise it 's on the , the mel fil filter bank , energies i ? professor c: you do doing the ? and you 're subtracting in the i it 's power domain , or magnitude domain . probably power domain , right ? professor c: , if you look at the theory , it 's it should be in the power domain but , i ' ve seen implementations where people do it in the magnitude domain and i have asked people why and they shrug their shoulders and say , " , it works . " and there 's this i there 's this mysterious people who do this a lot i have developed little tricks of the trade . , there 's this , you do n't just subtract the estimate of the noise spectrum . you subtract th that times phd b: and generated this , so you have the estimation of the power spectra of the noise , and you multiply this by a factor which is depend dependent on the snr . so . phd b: when the speech lev when the signal level is more important , compared to this noise level , the coefficient is small , and around one . but when the power le the s signal level is small compared to the noise level , the coefficient is more important . and this reduce actually the music musical noise , which is more important during silence portions , when the s the energy 's small . so there are tricks like this but , mmm . professor c: , that 's what differs from different tasks and different s , spectral subtraction methods . , if you have , fair assurance that , the noise is quite stationary , then the smartest thing to do is use as much data as possible to estimate the noise , get a much better estimate , and subtract it off . but if it 's varying , which is gon na be the case for almost any real situation , you have to do it on - line , with some forgetting factor . phd a: so do you is there some long window that extends into the past over which you calculate the average ? professor c: , there 's a lot of different ways of computing the noise spectrum . so one of the things that , hans - guenter hirsch did , and pas and other people actually , he 's he was n't the only one i , was to , take some period of speech and in each band , develop a histogram . so , to get a decent histogram of these energies takes at least a few seconds really . but , you can do it with a smaller amount but it 's pretty rough . and , the nist standard method of determining signal - to - noise ratio is based on this . professor c: so no , no , it 's based on this method , this histogram method . so you have a histogram . 
now , if you have signal and you have noise , you have these two bumps in the histogram , which you could approximate as two gaussians . professor c: so you have a mixture of two gaussians . right ? and you can use em to figure out what it is . so so now you have this mixture of two gaussians , you n they are , , you estimate what they are , and , so this gives you what the signal is and what the noise e energy is in that band in the spectrum . and then you look over the whole thing and now you have a noise spectrum . so , hans - guenter hirsch and others have used that method . and the other thing to do is which is more trivial and obvious is to , determine through magical means that , there 's no speech in some period , and then see what the spectrum is . , it 's that 's tricky to do . it has mistakes . , and if you ' ve got enough time , this other method appears to be somewhat more reliable . , a variant on that for just determining signal - to - noise ratio is to just , you can do a w a an iterative thing , em - like thing , to determine means only . i it is em still , but just determine the means only . do n't worry about the variances . and then you just use those mean values as being the , signal - to - noise ratio in that band . phd a: but what is the it seems like this thing could add to the latency . , depending on where the window was that you used to calculate the signal - to - noise ratio . professor c: if you just , if you just if you , a at the beginning you have some esti some phd b: actually , it 's a mmm if - if you want to have a good estimation on non - stationary noise you have to look in the future . , if you take your window and build your histogram in this window , what you can expect is to have an estimation of th of the noise in the middle of the window , not at the end . so the but people phd b: the they just look in the past . i it works because the noise are , pret , almost stationary professor c: , the thing , e , y , you 're talking about non - stationary noise but that spectral subtraction is rarely is not gon na work really for non - stationary noise , phd b: , if y if you have a good estimation of the noise , because it has to work . professor c: so so that what is wh what 's more common is that you 're going to be helped with r slowly varying or stationary noise . that 's what spectral subtraction will help with , practically speaking . if it varies a lot , to get a if if to get a good estimate you need a few seconds of speech , even if it 's centered , if you need a few seconds to get a decent estimate but it 's changed a lot in a few seconds , then it , i it 's a problem . , imagine e five hertz is the middle of the speech modulation spectrum , so imagine a jack hammer going at five hertz . phd b: so in this case , , you can not but y , hirsch does experiment with windows of like between five hundred milliseconds and one second . and , five hundred wa was not so bad . and he worked on non - stationary noises , like noise modulated with , wi with amplitude modulations things like that , phd b: , in the paper he showed that actually the estimation of the noise is delayed . 
, it 's there is you have to center the window , professor c: no , i understand it 's better to do but think that , for real noises wh what 's most likely to happen is that there 'll be some things that are relatively stationary where you can use one or another spectral subtraction thing and other things where it 's not so stationary , you can always pick something that falls between your methods , but i if , if sinusoidally , modul amplitude modulated noise is a big problem in practice . that it 's phd a: we could probably get a really good estimate of the noise if we just went to the noise files , and built the averages from them . phd b: but if the noise is stationary perhaps you do n't even need some noise estimation algorithm . we just take th the beginning of the utterance i know p i if people tried this for aurora . professor c: right , the word " stationary " is has a very precise statistical meaning . but , in signal - processing really what we 're talking about is things that change slowly , compared with our processing techniques . so if you 're driving along in a car i would think that most of the time the nature of the noise is going to change relatively slowly . it 's not gon na stay absolute the same . if you if you check it out , five minutes later you may be in a different part of the road or whatever . but it 's i using the local characteristics in time , is probably going to work pretty . but you could get hurt a lot if you just took some something from the beginning of all the speech , of , an hour of speech and then later , so they may be , may be overly , complicated for this test but , i . but what you 're saying , makes sense , though . , if possible you should n't you should make it , the center of the window . , we 're already having problems with these delay , delay issues . phd a: if they 're going to provide a , voice activity detector that will tell you the boundaries of the speech , then , could n't you just go outside those boundaries and do your estimate there ? professor c: you bet . so i imagine that 's what they 're doing , is they 're probably looking in nonspeech sections and getting some , phd b: they have some threshold on the previous estimate , . , ericsson used this threshold . , so , they h they have an estimate of the noise level and they put a threshold like six or ten db above , what 's under this threshold is used to update the estimate . is is that right or ? professor c: does france telecom do this does france telecom do th do the same thing ? more or less ? professor c: if we 're done with that , let 's see . , maybe we can talk about a couple other things briefly , just , things that we ' ve been chatting about but have n't made it into these meetings yet . so you 're coming up with your quals proposal , and , wanna just give a two three minute summary of what you 're planning on doing ? grad e: , two , three , it can be shorter than that . , i ' ve talked to some of you already . , but i ' m , looking into extending the work done by larry saul and john allen and mazin rahim . 
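The threshold-gated update attributed to Ericsson a little earlier, where only frames lying within some 6 to 10 dB of the current noise estimate are allowed to update it, might look like the following per-frame step. The per-bin gating, the margin, and the forgetting factor value are assumptions for illustration, not the actual Ericsson recipe.

```python
import numpy as np

def update_noise_psd(noise_psd, frame_power, margin_db=6.0, forget=0.98):
    """Threshold-gated recursive noise estimate, as described in the discussion.

    Bins whose power sits within `margin_db` of the current noise estimate are
    assumed to be noise-only and update the estimate with a forgetting factor;
    louder bins (presumed speech) leave the estimate untouched.
    """
    threshold = noise_psd * 10.0 ** (margin_db / 10.0)
    is_noise = frame_power < threshold
    updated = forget * noise_psd + (1.0 - forget) * frame_power
    return np.where(is_noise, updated, noise_psd)
```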
, they have a system that 's , a multi - band , system but their multi - band is a little different than the way that we ' ve been doing multi - band in the past , where where we ' ve been @ taking sub - band features and i training up these neural nets and on phonetic targets , and then combining them some somehow down the line , they 're taking sub - band features and , training up a detector that detects for , these phonetic features , he presents , a detector to detect sonorance . and so what it is , it 's there 's at the lowest level , there it 's an or ga , it 's an and gate . so , on each sub - band you have several independent tests , to test whether , there 's the existence of sonorance in a sub - band . and then , it c it 's combined by a soft and gate . and at the higher level , for every if , the higher level there 's a soft or gate . , so if this detector detects , the presence of sonorance in any of the sub - bands , then the detect , the or gate at the top says , " ok , this frame has evidence of sonorance . " grad e: and these are all , ok . , the low level detectors are logistic regressions . and the , professor c: so that 's all it is . it 's a sig it 's a sigmoid , with weighted sum at the input , which you train by gradient descent . grad e: right . so he uses , an em algorithm to train up these parameters for the logistic regression . professor c: so i was using em to get the targets . so so you have this and gate what we were calling an and gate , but it 's a product rule thing at the output . and then he uses , i u and then feeding into that are i ' m , there 's it 's an or at the output , is n't it ? professor c: so that 's the product . and then , then he has each of these and things . and , so they 're little neural units . they have to have targets . and so the targets come from em . phd a: and so are each of these , low level detectors are they , are these something that you decide ahead of time , like " i ' m going to look for this particular feature or i ' m going to look at this frequency , " or what what are they looking at ? what are their inputs ? grad e: right , so the ok , so at each for each sub - band there are , several measures of snr and correlation . and he said there 's like twenty of these per sub - band . and for every s every sub - band , e you just pick ahead of time , " i ' m going to have like five i independent logistic tests . " and you initialize these parameters , in some way and use em to come up with your training targets for a for the low - level detectors . and then , once you get that done , you train the whole thing on maximum likelihood . and h he shows that using this method to detect sonorance is it 's very robust compared to , to typical , full - band gaussian mixtures estimations of sonorance . and , so that 's just one detector . so you can imagine building many of these detectors on different features . you get enough of these detectors together , then you have enough information to do , higher level discrimination , discriminating between phones and then you keep working your way up until you build a full recognizer . so , that 's the direction which i ' m thinking about going in my quals . professor c: , it has a number of properties that i really liked . , one is the going towards , using narrow band information for , ph phonetic features of some sort rather than just , immediately going for the typical sound units . 
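A plain product-rule reading of the detector just described, several independent logistic tests per sub-band combined by a soft AND within the band and a soft OR across bands, is sketched below. The actual system trains these gates end-to-end with EM-derived targets, as the discussion goes on to explain; the array shapes and names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sonorance_score(band_feats, weights, biases):
    """Hierarchical soft-gate sonorance detector in the spirit described above.

    band_feats : list over sub-bands of (n_tests, n_feats) feature rows
    weights    : matching list of (n_tests, n_feats) logistic weights
    biases     : matching list of (n_tests,) logistic biases

    Each band ANDs its independent logistic tests (product of probabilities);
    the bands are then ORed (probability that at least one band fires).
    """
    band_probs = []
    for f, w, b in zip(band_feats, weights, biases):
        tests = sigmoid((w * f).sum(axis=1) + b)   # low-level logistic tests
        band_probs.append(np.prod(tests))          # soft AND within the band
    band_probs = np.array(band_probs)
    return 1.0 - np.prod(1.0 - band_probs)         # soft OR across the bands
```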
another thing i like about it is that you t this thing is going to be trained explicitly trained for a product of errors rule , which is what , allen keeps pointing out that fletcher observed in the twenties , for people listening to narrow band . that 's friday 's talk , . and then , , the third thing i like about it is , and we ' ve played around with this in a different way a little bit but it has n't been our dominant way of operating anything , this issue of where the targets come from . so in our case when we ' ve been training it multi - band things , the way we get the targets for the individual bands is , that we get the phonetic label for the sound there and we say , " ok , we train every " what this is saying is , ok , that 's maybe what our ultimate goal is or not ultimate but penultimate goal is getting these small sound units . but but , along the way how much should we , what should we be training these intermediate things for ? , because , we , that this is a particularly good feature . , there 's no way , someone in the audience yesterday was asking , " could n't you have people go through and mark the individual bands and say where the where it was sonorant or not ? " but , having a bunch of people listening to critical band wide , chunks of speech trying to determine whether it 'd be impossible . professor c: it 's all gon na sound like sine waves to you , more or less . not , it 's g all g narrow band i i m it 's very hard for someone to a person to make that determination . , we do n't really know how those should be labeled . it could sh be that you should , not be paying that much attention to , certain bands for certain sounds , in order to get the best result . so , what we have been doing there , just mixing it all together , is certainly much cruder than that . we trained these things up on the , the final label . now we have i done experiments you ' ve probably done where you have , done separate , viterbis on the different grad e: , it helps for one or t one iteration but , anything after that it does n't help . professor c: so so that may or may t it that aspect of what he 's doing may or may not be helpful because in a sense that 's the same thing . you 're taking global information and determining what you how you should but this is , i th a little more direct . professor c: and , he 's look he 's just actually looking at , the confusions between sonorant and non - sonorant . so he has n't applied it to recognition or if he did he did n't talk about it . it 's it 's just and one of the concerns in the audience , actually , was that , the , he did a comparison to , , our old foil , the nasty old standard recognizer with mel filter bank at the front , and h m ms , and . and , it did n't do nearly as , especially in noise . but the one of the good questions in the audience was , , but that was n't trained for that . , this use of a very smooth , spectral envelope is something that , has evolved as being generally a good thing for speech recognition but if you knew that what you were gon na do is detect sonorants or not so sonorants and non - sonorants is almost like voiced - unvoiced , except i that the voiced stops are also called " obstruents " . so it 's , but with the exception of the stops i it 's the same as voiced - unvoiced , so so if you knew you were doing that , if you were doing something say for a , a vocoder , you would n't use the same features . you would use something that was sensitive to the periodicity and not just the envelope . 
, and so in that sense it was an unfair test . so that the questioner was right . it it was in that sense an unfair test . nonetheless , it was one that was interesting because , this is what we are actually using for speech recognition , these smooth envelopes . and this says that perhaps even , trying to use them in the best way that we can , that we ordinarily do , with , gaussian mixtures and h m ms and , you do n't , actually do that on determining whether something is sonorant or not . professor c: which means you 're gon na make errors between similar sounds that are son sonorant or obstruent . phd a: did n't they also do some an oracle experiment where they said " if we could detect the sonorants perfectly and then show how it would improve speech recognition ? i remember hearing about an experiment like that . professor c: the - these same people ? i do n't remember that . that would that 's you 're right , that 's exactly the question to follow up this discussion , is suppose you did that , got that right . phd b: what could be the other low level detectors , for other features , in addition to detecting sonorants th - that 's what you want to go for also grad e: let 's see , i d i . e , w easiest thing would be to go do some voicing but that 's very similar to sonorance . phd a: when we when we talked with john ohala the other day we made a list of some of the things that w like frication , professor c: now this was coming at it from a different angle but maybe it 's a good way to start . , these are things which , john felt that a , a human annotator would be able to reliably mark . so the things he felt would be difficult for a human annotator to reliably mark would be tongue position kinds of things . professor c: but stress does n't , fit in this thing of coming up with features that will distinguish words from one another , it 's a it 's a good thing to mark and will probably help us ultimate with recognition phd a: there 's a few cases where it can like permit and permit . but that 's not very common in english . in other languages it 's more , important . professor c: but i either case you 'd write permit , so you 'd get the word right . phd a: no , i ' m saying , i e you were saying that stress does n't help you distinguish between words . , i see what you 're saying . as long as you get the sequence , professor c: we 're g if we 're doing if we 're talking about transcription as opposed to something else phd a: right ? , . so where it could help is maybe at a higher level . professor c: but that 's this afternoon 's meeting . we do n't understand anything in this meeting . so that 's , a neat thing grad e: s so , ohala 's going to help do these , transcriptions of the meeting data ? phd a: i . we d we did n't get that far . , we just talked about some possible features that could be marked by humans and , because of having maybe some extra transcriber time we thought we could go through and mark some portion of the data for that . and , professor c: , that 's not an immediate problem , that we do n't immediately have a lot of extra transcriber time . professor c: but but , in the long term i chuck is gon na continue the dialogue with john and , we 'll end up doing some . phd a: i ' m definitely interested in this area , too , f , acoustic feature . professor c: , it 's an interesting way to go . i say it like " said - int " . it has a number of good things . 
so , y you want to talk maybe a c two or three minutes about what we ' ve been talking about today and other days ? grad f: ri , ok , so , we 're interested in , methods for far mike speech recognition , mainly , methods that deal with the reverberation in the far mike signal . so , one approach would be , say msg and plp , like was used in aurora one and , there are other approaches which actually attempt to remove the reverberation , instead of being robust to it like msg . and so we 're interested in , comparing the performance of , a robust approach like msg with these , speech enhancement or de - reverber de - reverberation approaches . and , it looks like we 're gon na use the meeting recorder digits data for that . phd b: and the de - reverberation algorithm , do you have can you give some more details on this or ? does it use one microphone ? grad f: , there was something that was done by , a guy named carlos , i forget his last name , who worked with hynek , who , grad f: ok . who , , it was like rasta in the sense that of it was , de - convolution by filtering , except he used a longer time window , like a second maybe . and the reason for that is rasta 's time window is too short to , include the whole , reverberation , i what you call it the reverberation response . i if you see wh if you see what . the reverberation filter from my mouth to that mike is like it 's t got it 's too long in the time domain for the rasta filtering to take care of it . and , then there are a couple of other speech enhancement approaches which have n't been tried for speech recognition yet but have just been tried for enhancement , which , have the assumption that , you can do lpc analysis of th of the signal you get at the far microphone and the , all pole filter that you get out of that should be good . it 's just the , excitation signal that is going to be distorted by the reverberation and so you can try and reconstruct a better excitation signal and , feed that through the i , all pole filter and get enhanced speech with reverberation reduced . professor c: there 's also this , , echo cancellation that we ' ve been chasing , so , we have , and when we 're saying these digits now we do have a close microphone signal and then there 's the distant microphone signal . and you could as a baseline say , " ok , given that we have both of these , we should be able to do , a cancellation . " so that , , we , essentially identify the system in between the linear time invariant system between the microphones and re and invert it , or cancel it out to some reasonable approximation through one method or another . , that 's not a practical thing , if you have a distant mike , you do n't have a close mike ordinarily , but we thought that might make also might make a good baseline . , it still wo n't be perfect because there 's noise . , but and then there are s , there are single microphone methods that people have done for , for this de - reverberation . do y do any references to any ? cuz i w i was w i lead him down a bad path on that . phd b: i g i when people are working with single microphones , they are more trying to do phd b: , not very , there is the avendano work , but also trying to mmm , trying to f t find the de - convolution filter but in the not in the time domain but in the stream of features i . 
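The echo-cancellation baseline described here, identify the linear time-invariant channel between the close mic and the distant mic and then cancel or invert it, could start from a standard NLMS system identification like the sketch below. The tap count and step size are placeholder assumptions; a real room response at 16 kHz would need thousands of taps, and inverting the learned filter is a separate, ill-conditioned step not shown.

```python
import numpy as np

def nlms_identify(close, distant, taps=512, mu=0.5, eps=1e-6):
    """NLMS identification of the close-to-distant mic channel.

    Adapts an FIR filter h so that h * close tracks the distant signal; the
    learned h approximates the room response between the two mics, which a
    baseline could then (approximately) invert or subtract out.
    """
    h = np.zeros(taps)
    err = np.zeros(len(distant))
    buf = np.zeros(taps)                 # most recent close-mic samples
    for n in range(len(distant)):
        buf[1:] = buf[:-1]
        buf[0] = close[n] if n < len(close) else 0.0
        y = h @ buf                      # output of the current channel estimate
        err[n] = distant[n] - y          # residual the filter cannot explain
        h += (mu / (eps + buf @ buf)) * err[n] * buf  # normalised LMS update
    return h, err
```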
, @ there 's someone working on this on i in mons so perhaps , we should try t to he 's working on this , on trying to on re reverberation , professor c: the first paper on this is gon na have great references , tell already . it 's always good to have references , especially when reviewers read it or one of the authors and , feel they 'll " you 're ok , you ' ve r you cited me . " phd b: , he did echo cancellation and he did some fancier things like , , training different network on different reverberation conditions and then trying to find the best one , but . professor c: the oth the other thing , that dave was talking about earlier was , multiple mike things , where they 're all distant . so , , there 's all this work on arrays , but the other thing is , what can we do that 's cleverer that can take some advantage of only two mikes , particularly if there 's an obstruction between them , as we have over there . professor c: an obstruction between them . it creates a shadow which is helpful . it 's part of why you have such good directionality with , with two ears even though they 're not several feet apart . for most for most people 's heads . professor c: so that , the head , in the way , is really that 's what it 's for . it 's , professor c: it 's to separate the ears . that 's right , so . anyway , o k . , that 's all we have this week . and , it 's digit time . phd a: actually the , for some reason the digit forms are blank . , th that may be due to the fact that adam ran out of digits , and did n't have time to regenerate any . professor c: i it 's there 's no real reason to write our names on here then , phd a: cuz we put that into the " key " files . but w that 's why we have the forms , even if there are no digits . professor c: i did n't notice this . i ' m sitting here and i was about to read them too . it 's a , blank sheet of paper . ###summary: the berkeley meeting recorder group discussed the progress of several of their members. the progress being made on the group's main project , a speech recogniser for the cellular industry , was reported. the group also touched upon matters that had broader implications for the work , such as the work of other groups on the same project. there were also some progress reports from group members working on other projects. no one from the group attended a recent video conference about their main project , but they need to find out what was discussed in it. until they do , they will continue on , assuming nothing major has been changed. they need to discuss any new investigations with partners to make sure work is not repeated. there was a recent video conference meeting discussing the cellular project , but no one from the group attended , and so they do not know whether it has any implications for their work , or whether any important decisions were made. this includes decisions on the desired latency for the system , since the group is currently at the limit. spectral subtraction , which the group is currently investigating as a method of dealing with noise , may add to the delay time , and it is also hard to do with non-stationary noise. speakers mn007 and fn002 have been working on the group's main project , looking for bugs in the system , and trying to improve latency. the group's work currently has the highest latency on the project , and they are looking for ways to cut the delays. these include replacing fir filters with iir , and investigating spectral subtraction methods which do not require taking the future into account.
speaker me006 has put together a proposal to extend work on a multiband system using low-level detectors , and to apply it to recognition. speaker me026 has been looking at methods for recognition using far mics , trying to deal with reverberation and echo-cancellation.
grad f: , i should n't say it 's a good mike . all i really know is that the signal level is ok . i if it 's a the quality . grad f: ugh ! so i did n't send out agenda items because until five minutes ago we only had one agenda item and now we have two . so . and , . professor b: ok . so , just to repeat the thing bef that we said last week , it was there 's this suggestion of alternating weeks on more , automatic speech recognition related or not ? was that the division ? grad f: we have n't really started , but we more or less did meeting recorder last week , so we could do , grad f: but i figure also if they 're short agenda items , we could also do a little bit of each . so . i seem to be having difficulty getting this adjusted . here we go . so , as most of you should know , i did send out the consent form thingies and , so far no one has made any ach ! any comments on them . so , no on no one has bleeped out anything . professor b: . so , w what follows ? at some point y you go around and get people to sign something ? grad f: and we had decided that they have they only needed to sign once . and the agreement that they already signed simply said that we would give them an opportunity . so as long as we do that , we 're covered . professor b: july fifteenth . , so they have a plenty of time , and y given that it 's that long , why was that date chosen ? you just felt you wanted to ? professor b: no , the only th the only mention i recall about that was just that july fifteenth or so is when this meeting starts . postdoc a: you said you wanted it to be available then . i did n't mean it to be the hard deadline . postdoc a: it 's fine with me if it is , or we cou but it might be good to remind people two weeks prior to that professor b: we probably should have talked about it , cuz i because if we wanna be able to give it to people july fifteenth , if somebody 's gon na come back and say " ok , i do n't want this and this used " , clearly we need some time to respond to that . right ? grad f: and that 's the one i used . so . but send a follow - up . , it 's almost all us . the people who are in the meeti this meeting was , these the meetings that in are in set one . phd c: was my was my response ok ? wrote you replied to the email saying they 're all fine . grad f: i we do n't my understanding of what we had upon when we had spoken about this months ago was that , we do n't actually need a reply . postdoc a: and he 's got it so that the default thing you see when you look at the page is " ok " . so that 's very clear all the way down the page , " ok " . and they have two options they can change it to . one of them is " censor " , and the other one is " incorrect " . is it is your word is " incorrect " ? which means also we get feedback on if , there 's something that they w that needs to be adjusted , because , these are very highly technical things . , it 's an added , level of checking on the accuracy of the transcription , as i see it . but in any case , people can agree to things that are wrong . so . grad f: . the reason i did that it was just so that people would not censor not ask to have removed because it was transcribed incorrectly , postdoc a: was because it , it gives them the option of , being able to correct it . approve it and correct it . and . so , you have it nicely set up so they email you and , postdoc a: - . and i wanted to say the meetings that are involved in that set are robustness and meeting recorder . the german ones will be ready for next week . 
those are three of those . a different set of people . and we can impose postdoc a: ok . i spoke loosely . the the german , french , the german , dutch , and spanish ones . postdoc a: it was it was not the best characterization . but what i meant to say was that it 's the other group that 's not n no m no overlap with our present members . and then maybe it 'd be good to set an explicit deadline , something like a week before that , j july fifteenth date , or two weeks before . professor b: , i would suggest we discuss , if we 're going to have a policy on it , that we discuss the length of time that we want to give people , so that we have a uniform thing . so , tha that 's a month , which is fine . , it seems grad f: , the only thing i said in the email is that the data is going to be released on the fifteenth . i did n't give any other deadline . so my feeling is if someone after the fifteenth says , " i suddenly found something " , we 'll delete it from our record . we just wo n't delete it from whatever 's already been released . grad f: what else can we do ? if someone says " hey , look , i found something in this meeting and it 's libelous and i want it removed " . what can we do ? postdoc a: i agree with that part , but that it would it , we need to have , a message to them very clearly that beyond this date , you ca n't make additional changes . professor b: i i that somebody might request something even though we say that . but it 's good to at least start some place like that . professor b: so if we , ok , how long is a reasonable amount of time for people to have if we say two weeks , or if we say a month , we should just say that , i a as , " per the , page you signed , you have the ability to look over this " and , because we w these , i would imagine some generic thing that would say " because we , will continually be making these things available to other researchers , this ca n't be open - ended and so , give us back your response within this am , within this amount of time " , whatever time we agree upon . grad f: no . ok , why do n't you do that and then make comments on what you want me to change ? professor b: no , no . i ' m not saying that you should change anything . i ' m what i ' m trying to spark a discussion hopefully among people who have read it so that you can , decide on something . so i ' m not telling you what to decide . i ' m just saying you should decide something , grad f: and if you disagree with it , why do n't you read it and give me comments on it ? professor b: , the one thing that i did read and that you just repeated to me was that you gave the specific date of july fifteenth . and you also just said that the reason you said that was because someone said it to you . so what i ' m telling you is that what you should do is come up with a length of time that you guys think is enough and you should use that rather than this date that you just got from somewhere . that 's all i ' m saying . ok ? postdoc a: i ha i have one question . this is in the summer period and presumably people may be out of town . but we can make the assumption , ca n't we ? that , they will be receiving email , most of the month . right ? because if someone professor b: it , it , you 're right . sometimes somebody will be away and , , there 's , for any length of time that you , choose there is some person sometime who will not end up reading it . that 's it 's , just a certain risk to take . phd h: , ok . alright . so . 
the , maybe we should say in w , when the whole thing starts , when they sign the agreement that , specify exactly , what , how they will be contacted and they can , they can be asked to give a phone number and an email address , or both . and , then phd h: right . so . a and , then , say very clearly that if they do n't if we do n't hear from them , as morgan suggested , by a certain time or after a certain period after we contact them that is implicitly giving their agreement . postdoc a: , the form does n't say , if , " if you do n't respond by x number of days or x number of weeks " phd h: i see . , ok . so what does it say about the process of , y the review process ? postdoc a: it does n't have a time limit . that you 'll be provided access to the transcripts and then , allowed to remove things that you 'd like to remove , before it goes to the general , larger audience . phd e: i ' m not as diligent as chuck , but i had the feeling i should probably respond and tell adam , like , " i got this and i will do it by this date , and if you do n't hear from me by then " , in other words responding to your email once , right away , saying " as soon as you get this could you respond . " and then if you if the person thinks they 'll need more time because they 're out of town or whatever , they can tell you at that point ? because grad f: , i did n't wanna do that , because i do n't wanna have a discussion with every person if avoid it . grad f: so what i wanted to do was just send it out and say " on the fifteenth , the data is released , if you wanna do something about it , but that 's it " . phd h: , that 's that would be great if but you should probably have a legal person look at this and make it 's ok . because if you , do this and you then there 's a dispute later and , some , someone who understands these matters concludes that they did n't have , , enough opportunity to actually exercise their right phd e: or they might never have gotten the email , because although they signed this , they by which date to expect your email . and so someone whose machine is down or whatever , we have no grad f: so let 's say someone i send this out , and someone does n't respond . do we delete every meeting that they were in ? grad f: because people do n't read their email , or they 'll read and say " i do n't care about that , i ' m not gon na delete anything " and they don just wo n't reply to it . postdoc a: except the ones who , we 're in contact with all the ones in those two groups . postdoc a: so maybe , i , that 's not that many people and if i if , i there is an advantage to having them admit and if help with processing that , i will . it 's it 's there is an advantage to having them be on record as having received the mail and indicating grad f: and so it seems like this is a little odd for it to be coming up yet again . phd e: , there 's no way to get around i it 's the same am amount of work except for an additional email just saying they got the email . and maybe it 's better legally to wonder before , a little bit earlier than grad f: morgan , can you talk to our lawyer about it , and find out what the status is on this ? cuz i do n't wanna do something that we do n't need to . grad f: because what i ' m telling you , people wo n't respond to the email . no matter what you do , you there 're gon na be people who you 're gon na have to make a lot of effort to get in contact with . grad d: it 's like signing up for a mailing list . they have opt in and opt out . 
and there are two different ways . , and either way works probably , . postdoc a: except i really think in this case i ' m agr i agree with liz , that we need to be in the clear and not have to after the fact say " , but i assumed " , and " , i ' m that your email address was just accumulating mail without notifying you " , professor b: if this is a purely administrative task , we can actually have administration do it . professor b: but that , i , without going through a whole expensive thing with our lawyers , from my previous conversations with them , my sense very much is that we would want something on record as indicating that they actually were aware of this . grad f: , we had talked about this before and that we had even gone by the lawyers asking about that and they said you have to s they ' ve already signed away the f with that form that they ' ve already signed once . postdoc a: i do n't remember that this issue of the time period allowed for response was ever covered . professor b: we certainly did n't talk , about with them about , the manner of them being made the , materials available . phd h: we can use it we can use a ploy like they use to , that when they serve , like , , like dead - beat dads , they make it look like they won something in the lottery and then they open the envelope phd h: and that right ? because and then the served . so you just make it , " , you won , go to this web site and you ' ve , you 're " grad f: , it 's just , we ' ve gone from one extreme to the other , where at one point , a few months ago , morgan was you were saying let 's not do anything , grad f: and now we 're saying we have to follow up each person and get a signature ? phd h: it might be the case that this is perfectly , this is enough to give us a basis t to just , assume their consent if they do n't reply . but , i ' m not , me not being a lawyer , i would n't just wanna do that without having the expert , opinion on that . postdoc a: and how many people ? al - altogether we ' ve got twenty people . these people are people who read their email almost all the time . postdoc a: i really do n't see that it 's a problem . that it 's a common courtesy to ask them , to expect for them to , be able to have @ us try to contact them , postdoc a: u just in case they had n't gotten their email . they 'd appreciate it . professor b: . my adam , my view before was about the nature of what was of the presentation , of how my the things that we 're questioning were along the lines of how easy h how m how much implication would there be that it 's likely you 're going to be changing something , as opposed to that was the dispute i was making before . professor b: but , the attorneys , i , guarantee you , the attorneys will always come back with and we have to decide how stringent we want to be in these things , but they will always come back with saying that , you need to you want to have someth some paper trail or which includes electronic trail that they have , o k 'd it . so , that if you f i if we send the email as you have and if there 's half the people , say , who do n't respond by , some period of time , we can just make a list of these people and hand it to , just give it to me and i 'll hand it to administrative staff or whatever , and they 'll just call them up and say , " have you is is this ok ? and would you mail , mail adam that it is , if i if it , is or not . " so , we can do that . 
phd e: the other thing that there 's a psychological effect that at least for most people , that if they ' ve responded to your email saying " yes , i will do it " or " yes , i got your email " , they 're more likely to actually do it later than to just ignore it . and we do n't want them to bleep things out , but it 's a little bit better if we 're getting the their , final response , once they ' ve answered you once than if they never answer you 'd at al . that 's how these mailing houses work . so , it 's not completely lost work because it might benefit us in terms of getting responses . , an official ok from somebody is better than no answer , even if they responded that they got your email . and they 're probably more likely to do that once they ' ve responded that they got the email . postdoc a: i also think they 'd just simply appreciate it . it 's a good way of fostering goodwill among our subjects . , our participants . professor b: the main thing is , what lawyers do is they always look at worst cases . professor b: so they s so tha - that 's what they 're paid to do . and so , it is certainly possible that , somebody 's server would be down and they would n't actually hear from us , and then they find this thing is in there and we ' ve already distributed it to someone . so , what it says in there , is that they will be given an opportunity to blah - blah , but if we sent them something or we thought we sent them something but they did n't actually receive it for some reason , then we have n't given them that . grad f: , so how far do we have to go ? do we need to get someone 's signature ? or , is email enough ? professor b: , i ' ve been through this , i ' m not a lawyer , but i ' ve been through these things a f things f like this a few times with lawyers now so i ' m pretty comfortable with that . grad f: if they do n't submit the form , it goes in the general web log . but that 's not sufficient . right ? cuz if someone just visits the web site that does n't imply anything in particular . postdoc a: - . that 's right . i could get you on the notify list if you want me to . professor b: so again , hopefully , this should n't be quite as odious a problem either way , in any of the extremes we ' ve talked about because , we 're talking a pretty small number of people . grad f: w for this set , i ' m not worried , because we know everyone on it . , they 're all more or less here or it 's eric and dan and so on . but for some of the others , you 're talking about visitors who are gone from icsi , whose email addresses may or may not work , and so what are we gon na do when we run into someone that we ca n't get in touch with ? postdoc a: i do n't think , they 're so recent , these visitors . i and i they 're also so they 're prominent enough that they 're easy to find through , i w i 'll be able to if you have any trouble finding them , i really think i could find them . professor b: . cuz it what it really does promise here is that we will ask their permission . , and , if you go into a room and close the door and ask their permission and they 're not there , it does n't seem that 's the intent of , meaning here . so . grad f: , the qu the question is just whether how active it has to be . , because they filled out a contact information and that 's where i ' m sending the information . and so far everyone has done email . there is n't anyone who did , any other contact method . 
professor b: , the way icsi goes , people , who , were here ten years ago still have acc have forwards to other accounts and so on . so it 's unusual that they , grad f: so my original impression was that was sufficient , that if they give us contact information and that contact information is n't accurate that we fulfilled our burden . professor b: so if we get to a boundary case like that then maybe i will call the attorney about it . but , hopefully we wo n't need to . postdoc a: i d do n't think we will . for all the reasons that we ' ve discussed . grad f: . and we 'll see how many people respond to that email . so far , two people have . professor b: . very few people will and , people see long emails about things that they do n't think is gon na be high priority , they typically , do n't read it , or half read it . cuz people are swamped . postdoc a: , i did n't anticipate this so i that 's why i did n't give this comment , and it i this discussion has made me think it might be to have a follow - up email within the next couple of days saying " , we wanna hear back from you by x date and " , and then add what liz said , respond to indicate you received this mail . professor b: , or e , maybe even additionally , , even if you ' ve decided you have no changes you 'd like to make , if you could tell us that . phd e: right . that would that would definitely work on me . , it makes you feel m like , if you were gon na p if you 're predicting that you might not answer , you have a chance now to say that . whereas , i would be much more likely myself , phd e: given all my email , t to respond at that point , saying " what , i ' m probably not gon na get to it " or whatever , rather than just having seen the email , thinking i might get to it , and never really , pushing myself to actually do it until it 's too late . phd c: . i was thinking that it also lets them know that they do n't have to go to the page to accept this . phd c: , i so that way they could they can see from that email that if they just write back and say " i got it , no changes " , they 're off the hook . they do n't have to go to the web page professor b: , the other thing i ' ve learned from dealing with people sending in reviews and , is , if you say " you ' ve got three months to do this review " , people do it , two and seven eighths months from now . professor b: if you say " you ' ve got three weeks to do this review " , they do it , two and seven eighths weeks from now they do the review . and , so , if we make it a little less time , i do n't think it 'll be that much grad f: , and also if we want it ready by the fifteenth , that means we better give them deadline of the first , if we have any prayer of actually getting everyone to respond in time . professor b: there 's the responding part and there 's also what if , , i hope this does n't happen , what if there are a bunch of deletions that have to get put in and changes ? then we actually have to deal with that grad f: my god ! i had n't thought about that . that for every meeting any meeting which has any bleeps in it we need yet another copy of . phd e: as all of these . you have to do all you could just do it in that time period , though , grad f: , but you have to copy the whole file . because we 're gon na be releasing the whole file . postdoc a: i , at a certain point , that copy that has the deletions will become the master copy . grad f: . it 's just i hate deleting any data . 
so i do n't want i really would rather make a copy of it , rather than bleep it out grad f: and then overlapping . so , it 's exactly a censor bleep . so what i really think is " bleep " postdoc a: but and then w i was gon na say also that the they do n't have to stay on the system , as , phd e: see , this is good . i wanted to create some side conversations in these meetings . grad f: but , ha you ' ve seen the this the speech recognition system that reversed very short segments . grad f: did you read that paper ? it would n't work . the speech recognizer still works . grad f: good point . a point . , i ' m if i sound a little peeved about this whole thing . it 's just we ' ve had meeting after meeting a on this and it seems like we ' ve never gotten it resolved . professor b: so . and , and i ' m responding without , having much knowledge , but , i am , like , one of these people who gets a gazillion mails and comes in as grad f: , and that 's exactly why i did it the way i did it , which is the default is if you do nothing we 're gon na release it . because , i have my stack of emails of to d to be done , that , fifty or sixty long , and the ones at the top i ' m never gon na get to . and , so so professor b: so so the only thing we 're missing is some way to respond to easily to say , " ok , go ahead " . grad f: . that 's actually definitely a good point . the m email does n't specify that you can just reply to the email , as op as opposed to going to the form postdoc a: and it also does n't give a specific i did n't think of it . s it 's a good idea an ex explicit time by which this will be considered definite . phd h: this , i ' ve seen this recently . , i got email , and it i if i use a mime - capable mail reader , it actually says , click on this button to confirm receipt of the mail . phd h: no , no . this is different . this is not so , i know , you can tell , the , mail delivery agent to confirm that the mail was delivered to your mailbox . phd h: but but , no . this was different . ins - in the mail , there was a phd h: , th there was a button that when you clicked on it , it would send , , a actual acknowledgement to the sender that you had actually looked at the mail . phd h: but it o but it only works for , mime - capable , if you use netscape like that for your n professor b: and we actually need a third thing . it 's not that you ' ve looked at it , it 's that you ' ve looked at it and agree with one of the possible actions . phd h: no , no . you can do that . , you can put this button anywhere you want , phd h: and you can put it the bottom of the message and say " here , by clicking on this , i agree , i acknowledge " grad f: , i could put a url in there without any difficulty and even pretty simple mime readers can do that . postdoc a: but why should n't they just email back ? i do n't see there 's a problem . phd h: so i i there 's these logos that you can put at the bottom of your web page , like " powered by vi " . phd e: or how many ? six ? but , no of different people . so i if you 're in both these types of meetings , you 'd have a lot . but how , it also depends on how many like , if we release this time it 's a fairly small number of meetings , but what if we release , like , twenty - five meetings to people ? in th grad f: , what my s expectation is , is that we 'll send out one of these emails every time a meeting has been checked and is ready . phd e: i . , ok . so this time was just the first chunk . ok . grad f: so . tha - that was my intention . 
it 's just that we just happened to have a bunch all at once . grad f: , maybe is that the way it 's gon na be , you think , jane ? postdoc a: i agree with you . it 's we could do it , i could i 'd be happy with either way , batch - wise what i was thinking , so this one that was exactly right , that we had a , i had wanted to get the entire set of twelve hours ready . do n't have it . but , this was the biggest clump i could do by a time where it was reasonable . people would be able to check it and still have it ready by then . my , i was thinking that with the nsa meetings , i 'd like there are three of them , and they 're , i will have them done by monday . , unfortunately the time is later and i how that 's gon na work out , but it 'd be good to have that released as a clump , too , because then , they 're they have a it 's in a category , it 's not quite so distracting to them , is what i was thinking , and it 's all in one chu but after that , when we 're caught up a bit on this process , then , i could imagine sending them out periodically as they become available . postdoc a: i could do it either way . , it 's a question of how distracting it is to the people who have to do the checking . phd c: . let 's see . we , right . so we got the transcript back from that one meeting . everything seemed fine . adam had a script that will put everything back together and there was , there was one small problem but it was a simple thing to fix . and then , we , i sent him a pointer to three more . and so he 's off and working on those . grad f: . now we have n't actually had anyone go through that meeting , to see whether the transcript is correct and to see how much was missed and all that . grad f: , the one thing i noticed is it did miss a lot of backchannels . there are a fair number of " yeahs " and " - huhs " that it 's just that are n't in there . professor b: but . like you said , that 's gon na be our standard proc that 's what the transcribers are gon na be spending most of their time doing , i would imagine , postdoc a: do you suppose that was because they were n't caught by the pre - segmenter ? , interesting . ok . grad f: . they 're they 're not in the segmented . it 's not that the ibm people did n't do it . just they did n't get marked . postdoc a: ok . so maybe when the detector for that gets better i w i there 's another issue which is this we ' ve been , contacted by university of washington now , to , we sent them the transcripts that correspond to those six meetings and they 're downloading the audio files . so they 'll be doing that . chuck 's , put that in . phd c: - . , i pointed them to the set that andreas put , on the web so th if they want to compare directly with his results they can . and , then once , th we can also point them at the , , the original meetings and they can grab those , too , with scp . phd e: no , of the transcripts . , we can talk about it off - line . grad f: there 's another meeting in here , what , at four ? , so we have to finish by three forty - five . phd h: d d so , does washi - does uw wanna u do this wanna use this data for recognition or for something else ? phd e: they 're doing w did n't they want to do language modeling on , recognition - compatible transcripts postdoc a: this is to show you , some of the things that turn up during the checking procedure . 
postdoc a: @ so , this is from one of the nsa meetings and , i if you 're familiar with the diff format , the arrow to the left is what it was , and the arrow to the right is what it was changed to . so , . and now the first one . ok . so , then we started a weekly meeting . the last time , and the transcriber thought " little too much " but , really , it was " we learned too much " , which makes more sense syntactically as . postdoc a: , she was uncertain about that . so she 's right to be uncertain . and it 's also a g a good indication of the of that . postdoc a: the next one . this was about , claudia and she 'd been really b busy with , such as waivers . , ok . , next one . this was an interesting one . so the original was " so that 's not so claudia 's not the bad master here " , and then he laughs , but it really " web master " . postdoc a: and then you see another type of uncertainty which is , they just did n't to make out of that . so instead of " split upon unknown " , it 's " split in principle " . postdoc a: no , no . these are these are our local transcriptions of the nsa meetings . postdoc a: , then you get down here . sometimes some speakers will insert foreign language terms . that 's the next example , the next one . the , version beyond this is so instead of saying " or " , especially those words , " also " and " oder " and some other ones . those sneak in . , the next one postdoc a: s , what ? discourse markers ? . , . and it 's and it makes sense postdoc a: cuz it 's , like , below this it 's a little subliminal there . ok , the next one , this is a term . the problem with terminology . description with th the transcriber has " x as an advance " . but really it 's " qs in advance " . , i ' ve benefited from some of these , cross - group meetings . ok , then you got , , instead of " from something - or - other cards " , it 's " for multicast " . and instead of " ann system related " , it 's " end system related " . this was changed to an acronym initially and it should should n't have been . and then , you can see here " gps " was misinterpreted . it 's just understanda this is this is a lot of jargon . , and the final one , the transcriber had th " in the core network itself or the exit unknown , not the internet unknown " . and it comes through as " in the core network itself of the access provider , not the internet backbone core " . now this is a lot of terminology . and they 're generally extremely good , but , in this area it really does pay to , to double check and i ' m hoping that when the checked versions are run through the recognizer that you 'll see s substantial improvements in performance cuz the , there 're a lot of these in there . postdoc a: it 's jargon . this is cuz , you do n't realize in daily life how much you have top - down influences in what you 're hearing . phd h: but but but we do n't , our language model right now does n't know about these words anyhow . so , un until you actually get a decent language model , @ adam 's right . phd h: you probably wo n't notice a difference . but it 's , it 's definitely good that these are fixed . postdoc a: , also from the standpoint of getting people 's approval , cuz if someone sees a page full of , barely decipherable w , sentences , and then is asked to approve of it or not , it 's , professor b: . that would be a shame if people said " , i do n't approve it because the it 's not what i said " .
grad f: is that i was afraid people would say , " let 's censor that because it 's wrong " , and i do n't want them to do that . postdoc a: and then i also the final thing i have for transcription is that i made a purchase of some other headphones postdoc a: because of the problem of low gain in the originals . and and they very much appro they mu much prefer the new ones , and actually , that there will be fewer things to correct because of the choice . we 'd originally chosen , very expensive head headsets but , they 're just not as good as these , in this with this respect to this particular task . postdoc a: but we chose them because that 's what 's been used here by prominent projects in transcription . phd h: no , no . , just earphones ? , because i , i could use one on my workstation , just to t because sometimes i have to listen to audio files and i do n't have to b go borrow it from someone and postdoc a: we have actua actually i have w , that if we have four people come to work for a day , i was hanging on to the others for , for spares , but tell you what i recommend . phd e: , that we should order a cou , t two or three or four , actually . phd h: i have a pair that i brought from home , but it 's f just for music listening professor b: i realized something i should talk about . so what 's the other thing on the agenda actually ? grad f: , the only one was don wanted to , talk about disk space yet again . grad d: u it 's short . , if you wanna go , we can just throw it in at the end . professor b: no , no . why do n't you why do n't you go ahead since it 's short . phd e: see , if i had that little scratch - pad , i would have made an x there . grad d: . so , without thinking about it , when i offered up my hard drive last week grad d: but , no . i , i realized that we 're going to be doing a lot of experiments , o for this , paper we 're writing , so we 're probably gon na need a lot more we 're probably gon na need that disk space that we had on that eighteen gig hard drive . but , we also have someone else coming in that 's gon na help us out with some . grad d: . , i , all i need is to hang it off , like , the person who 's coming in , sonali 's , computer . phd h: , so , you mean the d the internal the disks on the machines that we just got ? grad d: so are we gon na move the off of my hard drive onto that when those come in ? grad f: if you 're if you 're desperate , i have some space on my drive . grad d: . find something if i ' m desperate and , in the meantime i 'll just hold out . that was the only thing i wanted to bring up . professor b: . i was just going to comment that i ' m going to , be on the phone with mari tomorrow , late afternoon . we 're supposed to get together and talk about , where we are on things . , there 's this meeting coming up , and there 's also an annual report . now , i never actually i was asking about this . i do n't really quite understand this . she was re she was referring to it as this actually did n't just come from her , but this is what , darpa had asked for . , she 's referring to it as the an annual report for the fiscal year . but the fiscal year starts in october , so i do n't quite understand w why we do an annual report that we 're writing in july . professor b: , it 's none of those . it 's that the meeting is in july so they so darpa just said do an annual report . so anyway , i 'll be putting together . i 'll do it , , as much as without bothering people , just by looking at papers and status reports . 
, the status reports you do are very helpful . , so grab there . and if , if i have some questions i 'll professor b: . if people could do it as soon as you can , if you have n't done one si recently . , but , i ' m before it 's all done , i 'll end up bugging people for more clarification about . but , i i know what people have been doing . we have these meetings and there 's the status reports . but , . . so that was n't a long one . just to tell you that . and if something has n't , i 'll be talking to her late tomorrow afternoon , and if something has n't been in a status report and you think it 's important thing to mention on this thing , just pop me a one - liner and i 'll have it in front of me for the phone conversation . i , you 're still pecking away at the demos and all that , probably . grad f: did you wanna talk about that this afternoon ? not here , but later today ? grad d: we should probably talk off - line about when we 're gon na talk off - line . professor b: , i might want to get updated about it in about a week cuz , i ' m actually gon na have a few days off the following week , a after the picnic . grad f: so we were gon na do status of speech transcription automatic transcription , but we 're running late . phd e: we should stop , like , twenty of at the latest . we we have another meeting coming in that they wanna record . professor b: . , i ' m gon na be on the phone tomorrow , so this is just a good example of the thing i 'd like to hear about . phd h: i ' m not actually , i ' m not what ? are we supposed to have done something ? grad f: ok . so now we have the schedule . so next week we 'll do automatic transcription status , plus anything that 's real timely . phd h: , the next thing on our agenda is to go back and look at the , the automatic alignments because , i got some i learned from thilo what data we can use as a benchmark to see how we 're doing on automatic alignments of the background speech or , of the foreground speech with background speech . so . phd e: the , when he can get these , before we were working with these segments that were all synchronous and that caused a lot of problems because you have timed sp at on either side . phd e: and so that 's a stage - two of trying the same kinds of alignments with the tighter boundaries with them is really the next step . phd e: we did get our , i , good news . we got our abstract accepted for this conference , workshop , isca workshop , in , , new jersey . and we sent in a very poor abstract , but they very poor , very quick . but we 're hoping to have a paper for that as , which should be an interesting phd e: the t paper is n't due until august . the abstracts were already due . so it 's that workshop . but , the good news is that will have the european experts in prosody a different crowd , and we 're the only people working on prosody in meetings so far , so that should be interesting . phd e: some generic , so it 's focused on using prosody in automatic systems and there 's a , a web page for it . grad f: i do n't have a paper but i 'd kinda like to go , if i could . is that alright ? grad f: ok . , that th hey , if that 's what it takes , that 's fine with me . i 'll pick up your dry - cleaning , too . should we do digits ?
although the meeting recorder group only list two agenda items , this meeting explores transcription , and in particular , consent forms in depth , and at times results in heated debate. with regard to obtaining consent , the group discuss the extent to which they need to attempt to contact people , which methods are most appropriate , and how much responsibility rests on participants being available and checking their e-mail regularly. the group suggest sending reminder e-mails , although since many participants are local they can be contacted by other means if necessary. transcriptions are back from ibm , and the group discuss the checking of these , particularly since the pre-segmenter has interfered with back-channel data. checking of the nsa meetings has revealed that this non-native english meeting data contains transcription inaccuracies due to the use of foreign language terms and technical vocabulary. additional topics covered more briefly in this meeting are disk space , the darpa annual report , progress with the demo , conference submissions and attendance , and requests from the university of washington for data. the group discuss whether this meeting will relate to either meeting recorder or speech recognition issues. they decide that covering such topics over alternate weeks will commence at the next meeting , although other topics will be discussed if time allows. the group decide that it would be good to set a date for having the non-native network services group data , and one or two weeks before the 15th of july is suggested. with regard to contacting participants to request consent , the group decide that no signature is required , and an e-mail would be enough. however , for "boundary cases" , legal advice would be sought. as soon as the next set of data is ready for checking , participants should be contacted , so that this process is on-going. when the deadline for giving consent is approaching , a reminder e-mail should be sent out. in cases where no consent response is given , participants could be chased up since many are local. original uncensored copies of meetings will be kept , with all of the old signal deleted and replaced with new when censoring. gaining consent from participants for the use of the meeting data is raising a number of issues for the group , some of which may have legal implications. firstly , a relatively arbitrary deadline of 15 july has been set , and since this is during the summer break , the group debate whether enough action has been taken to contact participants. if people don't respond in time , the group discuss what facility there should be for later amendments , and whether a meeting can be used if they don't respond at all. checking of the nsa meeting transcripts has shown that , although the recordings are acoustically fine , some errors have shown up , especially relating to foreign language terms and jargon or technical terminology. additionally , the discussions reveal that there is a shortage of headphones amongst group members , and also that disk space is in short supply , especially if original copies of edited transcripts are retained. progress has been made regarding gaining consent from participants to publish the meeting data. e-mails were sent out to request that the transcripts are checked , and corrected or censored , in time for the data to be used in the darpa meeting in july. transcriptions are back from ibm , and although there was a small problem , this was simple to fix.
they have not yet been checked to see whether they are correct ; however , back-channels appear to have been missed , as these were not caught by the pre-segmenter. the university of washington has been in touch with the group requesting audio files and transcripts , and new headphones , which are much better than the previous ones , have been purchased for the transcribers. disk space is again filling up fast , and a new 100 gig hard drive will soon be available. as a temporary measure , the group will use up disk space on each other's machines. the annual report for darpa will be written over the next week based on project status information; the darpa demo is ongoing , with automatic alignment and tighter boundaries to be investigated. a conference abstract has been accepted.
phd e: so it 's , it 's spectral subtraction or wiener filtering , depending on if we put if we square the transfer function or not . phd e: and then with over - estimation of the noise , depending on the , the snr , with smoothing along time , smoothing along frequency . phd e: it 's very simple , smoothing things . and , the best result is when we apply this procedure on fft bins , with a wiener filter . and there is no noise addition after that . phd e: so it 's good because it 's difficult when we have to add noise to find the right level . phd e: . so the sh it 's the sheet that gives fifty - f three point sixty - six . , the second sheet is abo , about the same . it 's the same , idea but it 's working on mel bands , and it 's a spectral subtraction instead of wiener filter , and there is also a noise addition after , cleaning up the mel bins . mmm . , the results are similar . professor b: . , it 's actually , very similar . , if you look at databases , the , one that has the smallest smaller overall number is actually better on the finnish and spanish , but it is , worse on the , aurora professor b: , . so , it probably does n't matter that much either way . but , when you say u , unified do you mean , it 's one piece of software now , or ? phd e: so now we are , setting up the software . , it should be ready , very soon . , and we professor b: ok . so a week ago maybe you were n't around when hynek and guenther and i ? professor b: , ok . so , let 's summarize . and then if i summarize somebody can tell me if i ' m wrong , which will also be possibly helpful . what did press here ? i hope this is still working . professor b: we , we looked at , { nonvocalsound } anyway we after coming back from qualcomm we had , very strong feedback and , it was hynek and guenter 's and my opinion also that , , we spread out to look at a number of different ways of doing noise suppression . but given the limited time , it was time to choose one . professor b: , and so , th the vector taylor series had n't really worked out that much . , the subspace , had not been worked with so much . , so it came down to spectral subtraction versus wiener filtering . , we had a long discussion about how they were the same and how they were d , completely different . and , , fundamentally they 're the same thing but the math is a little different so that there 's a there 's an exponent difference in the index , what 's the ideal filtering , and depending on how you construct the problem . and , i it 's sort , after that meeting it made more sense to me because , if you 're dealing with power spectra then how are you gon na choose your error ? and typically you 'll do choose something like a variance . and so that means it 'll be something like the square of the power spectra . whereas when you 're doing the , , looking at it the other way , you 're gon na be dealing with signals professor b: and you 're gon na end up looking at power , noise power that you 're trying to reduce . and so , so there should be a difference of , conceptually of , a factor of two in the exponent . but there 're so many different little factors that you adjust in terms of , , over - subtraction and , that arguably , you 're c and the choice of do you operate on the mel bands or do you operate on the fft beforehand . 
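a minimal sketch of the recipe being described above may help — a gain computed from a noise estimate (on fft bins or mel bands), with snr-dependent over-estimation of the noise, smoothing along time and frequency, and the choice between spectral subtraction and wiener filtering made by whether the transfer function is squared. this is an illustration, not the group's actual software; the over-subtraction schedule, smoothing widths, and gain floor are invented:

```python
import numpy as np

def suppress_noise(power_spec, noise_psd, wiener=True, floor=0.01):
    """Apply spectral subtraction or Wiener filtering to a
    (frames x bins) power spectrogram, given a noise PSD estimate.
    The over-subtraction schedule, smoothing widths and floor are
    guesses, not the values used by the group."""
    eps = 1e-10
    # SNR-dependent over-estimation of the noise: subtract more
    # aggressively where the local SNR is low.
    snr = power_spec / (noise_psd + eps)
    alpha = np.clip(3.0 - 0.5 * np.log10(snr + eps), 1.0, 4.0)

    # Common transfer function, floored to avoid negative power.
    gain = np.maximum(1.0 - alpha * noise_psd / (power_spec + eps), floor)
    if wiener:
        gain = gain ** 2  # squared transfer function -> Wiener filtering

    # Simple smoothing of the gain along time (axis 0) and
    # frequency (axis 1) to reduce musical noise.
    kernel = np.ones(3) / 3.0
    for axis in (0, 1):
        gain = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode='same'), axis, gain)
    return power_spec * gain
```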
there 're so many other choices to make that are almost , if not independent , certainly in addition to the choice of whether you , do spectral subtraction or wiener filtering , that , @ again we felt the gang should just figure out which it is they wanna do and then let 's pick it , go forward with it . so that 's that was last week . and and , we said , take a week , go arm wrestle , figure it out . , and th the joke there was that each of them had specialized in one of them . professor b: and and so they so instead they went to yosemite and bonded , and they came out with a single piece of software . so it 's another victory for international collaboration . phd a: so so you guys have combined or you 're going to be combining the software ? phd c: like you can parse command - line arguments . so depending on that , it becomes either spectral subtraction or wiener filtering . so , ye professor b: , that 's fine , but the important thing is that there is a piece of software that you that we all will be using now . phd e: , we can do it later . but , still so , there will be a piece of software with , will give this system , the fifty - three point sixty - six , by default and professor b: but we were considerably far behind . and , this does n't have neural net in yet . ? so it so , it 's not using our full bal bag of tricks , if you will . and , and it is , very close in performance to the best thing that was there before . , but , looking at it another way , maybe more importantly , we did n't have any explicit noise , handling stationary dealing with e we did n't explicitly have anything to deal with stationary noise . and now we do . phd a: so will the neural net operate on the output from either the wiener filtering or the spectral subtraction ? or will it operate on the original ? professor b: , so argu arguably , what we should do , i gather you have it sounds like you have a few more days of nailing things down with the software and so on . but and then but , arguably what we should do is , even though the software can do many things , we should for now pick a set of things , th these things i would , and not change that . and then focus on everything that 's left . and , that our goal should be by next week , when hynek comes back , to , really just to have a firm path , for the time he 's gone , of , what things will be attacked . but i would thought think that what we would wanna do is not futz with this for a while because what 'll happen is we 'll change many other things in the system , and then we 'll probably wanna come back to this and possibly make some other choices . but , . phd a: but just conceptually , where does the neural net go ? do do you wanna h run it on the output of the spectrally subtracted ? professor b: , depending on its size , one question is , is it on the , server side or is it on the terminal side ? , if it 's on the server side , it you probably do n't have to worry too much about size . so that 's an argument for that . we do still , however , have to consider its latency . so the issue is , , could we have a neural net that only looked at the past ? professor b: , what we ' ve done in the past is to use the neural net , to transform , all of the features that we use . so this is done early on . this is essentially , i it 's more or less like a spee a speech enhancement technique here right ? where we 're just creating new if not new speech at least new fft 's that have , which could be turned into speech , that have some of the noise removed . 
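one way to write out the "factor of two in the exponent" under discussion — a reading of the argument, not a formula taken from the group's code. with noisy power spectrum \(X(\omega)\), noise estimate \(\hat N(\omega)\), over-subtraction factor \(\alpha\), and floor \(\beta\):

\[
H(\omega) = \max\!\left(1 - \alpha\,\frac{\hat N(\omega)}{X(\omega)},\; \beta\right),
\qquad
\hat S_{\text{ss}}(\omega) = H(\omega)\,X(\omega),
\qquad
\hat S_{\text{wiener}}(\omega) = H(\omega)^{2}\,X(\omega) .
\]

both methods share the transfer function \(H\); squaring it turns power spectral subtraction into the wiener rule, which is the exponent difference of two being referred to, and in practice the over-subtraction and flooring knobs blur the distinction further.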
, after that we still do a mess of other things to produce a bunch of features . and then those features are not now currently transformed by the neural net . and then the way that we had it in our proposal - two before , we had the neural net transformed features and we had the untransformed features , which i you actually did linearly transform with the klt , professor b: but , to orthogonalize them but they were not , processed through a neural net . and stephane 's idea with that , as i recall , was that you 'd have one part of the feature vector that was very discriminant and another part that was n't , which would smooth things a bit for those occasions when , the testing set was quite different than what you 'd trained your discriminant features for . so , all of that is , still seems like a good idea . now we know some other constraints . we ca n't have unlimited amounts of latency . , y , that 's still being debated by the by people in europe but , no matter how they end up there , it 's not going to be unlimited amounts , so we have to be a little conscious of that . so there 's the neural net issue . there 's the vad issue . and , there 's the second stream thing . and those that we last time we that those are the three things that have to get , focused on . phd a: and so the w the default , boundaries that they provide are they 're ok , but they 're not all that great ? professor b: i they still allow two hundred milliseconds on either side or some ? is that what the deal is ? phd e: , so th , they keep two hundred milliseconds at the beginning and end of speech . and they keep all the phd e: and all the speech pauses , which is sometimes on the speechdat - car you have pauses that are more than one or two seconds . more than one second for . and , it seems to us that this way of just dropping the beginning and end is not we cou we can do better , because , with this way of dropping the frames they improve over the baseline by fourteen percent and sunil already showed that with our current vad we can improve by more than twenty percent . phd e: so , our current vad is more than twenty percent , while their is fourteen . phd e: so . and another thing that we did also is that we have all this training data for let 's say , for speechdat - car . we have channel zero which is clean , channel one which is far - field microphone . and if we just take only the , vad probabilities computed on the clean signal and apply them on the far - field , test utterances , then results are much better . in some cases it divides the error rate by two . so it means that there are stim still phd e: . it 's not a median filtering . it 's just we do n't take the median value . we take something , so we have eleven , frames . professor b: so , i was just noticing on this that it makes reference to delay . so what 's the ? if you ignore , the vad is in parallel , is n't i is n't it , with the ? , it is n't additive with the , lda and the wiener filtering , and . phd c: . so so what happened right now , we removed the delay of the lda . so we , if so if we if so which is like if we reduce the delay of va so , the f the final delay 's now ba is f determined by the delay of the vad , because the lda does n't have any delay . so if we re if we reduce the delay of the vad , it 's like effectively reducing the delay . phd c: so the lda and the vad both had a hundred millisecond delay . so and they were in parallel , so which means you pick either one of them the biggest , whatever . 
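the two-stream feature vector being recalled here — one discriminant, net-transformed part and one merely decorrelated part — could look roughly like the following. mlp_forward and klt_matrix stand in for transforms learned elsewhere, and the dimensionalities are placeholders rather than values from the proposal:

```python
import numpy as np

def two_stream_features(feats, mlp_forward, klt_matrix, n_net=24):
    """Hypothetical proposal-2 style feature vector: a discriminant
    stream from a neural net concatenated with a KLT-orthogonalized
    (decorrelated but not discriminant) copy of the same features."""
    discriminant = mlp_forward(feats)[:, :n_net]  # net-transformed part
    smooth = feats @ klt_matrix                   # KLT part, no net
    return np.hstack([discriminant, smooth])
```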
so , right now the lda delays are more . professor b: and there did n't seem to be any , penalty for that ? there did n't seem to be any penalty for making it causal ? phd c: pardon ? , no . it actually made it , like , point one percent better , actually . professor b: so that 's really not bad . so we may we 'll see what they decide . we may have , the , latency time available for to have a neural net . , sounds like we probably will . that 'd be good . cuz i cuz it certainly always helped us before . professor b: , they 're disputing it . , they 're saying , one group is saying a hundred and thirty milliseconds and another group is saying two hundred and fifty milliseconds . two hundred and fifty is what it was before actually . so , some people are lobbying to make it shorter . and , . phd a: were you thinking of the two - fifty or the one - thirty when you said we should have enough for the neural net ? professor b: , it just it when we find that out it might change exactly how we do it , is all . , how much effort do we put into making it causal ? , the neural net will probably do better if it looks at a little bit of the future . but , it will probably work to some extent to look only at the past . and we ha , limited machine and human time , and effort . and , how much time should we put into that ? so it 'd be helpful if we find out from the standards folks whether , they 're gon na restrict that or not . but , at this point our major concern is making the performance better and , if , something has to take a little longer in latency in order to do it that 's , a secondary issue . but if we get told otherwise then , we may have to c clamp down a bit more . phd c: so , the one difference is that was there is like we tried computing the delta and then doing the frame - dropping . phd c: the earlier system was do the frame - dropping and then compute the delta on the so this phd c: so the frame - dropping is the last thing that we do . so , what we do is we compute the silence probability , convert it to that binary flag , and then in the end you c up upsample it to match the final features number of phd c: it seems to be helping on the - matched condition . so that 's why this improvement i got from the last result . so . and it actually r reduced a little bit on the high mismatch , so in the final weightage it 's b better because the - matched is still weighted more than professor b: so , @ , you were doing a lot of changes . did you happen to notice how much , the change was due to just this frame - dropping problem ? what about this ? phd e: just the frame - dropping problem . but it 's difficult . sometime we change two things together and but it 's around maybe it 's less than one percent . it professor b: . but like we 're saying , if there 's four or five things like that then pretty sho soon you 're talking real improvement . phd e: . and it and then we have to be careful with that also with the neural net because in the proposal the neural net was also , working on after frame - dropping . phd e: mmm . , we can do the frame - dropping on the server side or we can just be careful at the terminal side to send a couple of more frames before and after , and so . it 's ok . phd a: you have , so when you , maybe i do n't quite understand how this works , but , could n't you just send all of the frames , but mark the ones that are supposed to be dropped ? cuz you have a bunch more bandwidth . professor b: , you could . 
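the ordering change described above — compute the deltas over the full frame stream first , and only then drop the frames flagged as silence — can be sketched as follows . this is toy code ; the delta window and feature shapes are illustrative :

```python
import numpy as np

def compute_deltas(feats, w=2):
    """Standard regression deltas over a +/- w frame window (w invented)."""
    pad = np.pad(feats, ((w, w), (0, 0)), mode="edge")
    num = sum(k * (pad[w + k:len(feats) + w + k] - pad[w - k:len(feats) + w - k])
              for k in range(1, w + 1))
    return num / (2.0 * sum(k * k for k in range(1, w + 1)))

def drop_silence_last(feats, speech_flags, w=2):
    """Deltas first over the full stream, frame-dropping last,
    per the revised pipeline order discussed above."""
    full = np.hstack([feats, compute_deltas(feats, w)])
    return full[np.asarray(speech_flags, dtype=bool)]
```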
, it always seemed to us that it would be to in addition to , reducing insertions , actually use up less bandwidth . phd a: and that way the net could use if the net 's on the server side then it could use all of the frames . phd c: yes , it could be . it 's , like , you mean you just transferred everything and then finally drop the frames after the neural net . that 's that 's one thing which phd c: . right now we are , ri right now what wha what we did is , like , we just mark we just have this additional bit which goes around the features , saying it 's currently a it 's a speech or a nonspeech . phd c: so there is no frame - dropping till the final features , like , including the deltas are computed . and after the deltas are computed , you just pick up the ones that are marked silence and then drop them . professor b: so it would be more or less the same thing with the neural net , i , actually . professor b: so , what 's , ? that 's that 's a good set of work that , phd c: just one more thing . like , should we do something f more for the noise estimation , because we still ? professor b: . i was wondering about that . that was i had written that down there . phd e: so , we , actually i did the first experiment . this is with just fifteen frames . we take the first fifteen frame of each utterance to it , and average their power spectra . i tried just plugging the , , guenter noise estimation on this system , and it , it got worse . but i did n't play with it . but - . , i did n't do much more for noise estimation . tried this , professor b: . , it 's not surprising it 'd be worse the first time . but , it does seem like , i some compromise between always depending on the first fifteen frames and a always depending on a pause is a good idea . , maybe you have to weight the estimate from the first - teen fifteen frames more heavily than was done in your first attempt . but but professor b: no , do you have any way of assessing how or how poorly the noise estimation is currently doing ? phd c: is there was there any experiment with ? , i did the only experiment where i tried was i used the channel zero vad for the noise estimation and frame - dropping . so i do n't have a split , like which one helped more . so . it it was the best result i could get . so , that 's the professor b: so that 's something you could do with , this final system . just do this everything that is in this final system except , use the channel zero . professor b: if it 's , essentially not better , then it 's probably not worth any more . phd c: . but the guenter 's argument is slightly different . it 's , like , ev even if i use a channel zero vad , i ' m just averaging the s power spectrum . but the guenter 's argument is , like , if it is a non - stationary segment , then he does n't update the noise spectrum . so he 's , like he tries to capture only the stationary part in it . so the averaging is , like , different from updating the noise spectrum only during stationary segments . so , th the guenter was arguing that , even if you have a very good vad , averaging it , like , over the whole thing is not a good idea . phd c: because you 're averaging the stationary and the non - stationary , and finally you end up getting something which is not really the s because , you anyway , you ca n't remove the stationary part fr , non - stationary part from the signal . phd c: so . so you just update only doing or update only the stationary components . 
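the compromise floated above — seed the noise estimate from the first fifteen frames , then keep updating it , but only on frames judged stationary — might look like this in outline ; the smoothing constant and flag convention are invented :

```python
import numpy as np

def estimate_noise(power_spectra, stationary_flags, n_init=15, rho=0.98):
    """Noise estimate: average of the first n_init frames, then a slow
    recursive update only on frames flagged stationary/non-speech."""
    noise = power_spectra[:n_init].mean(axis=0)
    for frame, stationary in zip(power_spectra[n_init:],
                                 stationary_flags[n_init:]):
        if stationary:
            noise = rho * noise + (1.0 - rho) * frame
    return noise
```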
so , that 's so that 's still a slight difference from what guenter is trying professor b: , . and and also there 's just the fact that , , although we 're trying to do very on this evaluation , we actually would like to have something that worked in general . and , relying on having fifteen frames at the front is pretty professor b: so , . , it 'd certainly be more robust to different kinds of input if you had at least some updates . , i . what what do you , what do you guys see as being what you would be doing in the next week , given wha what 's happened ? phd e: so , should we keep the same ? we might try to keep the same idea of having a neural network , but training it on more data and adding better features , but because the current network is just plp features . , it 's trained on noisy plp phd e: plp features computed on noisy speech . but there is no nothing particularly robust in these features . phd a: so , i do n't remember what you said the answer to my , question earlier . will you will you train the net on after you ' ve done the spectral subtraction or the wiener filtering ? phd c: so that vad was trained on the noisy features . so , right now we have , like , we have the cleaned - up features , so we can have a better vad by training the net on the cleaned - up speech . phd c: , but we need a vad for noise estimation also . so it 's , like , where do we want to put the vad ? , it 's like phd a: can you use the same net that you that i was talking about to do the vad ? phd c: , it actually comes at v at the very end . so the net the final net , which is the feature net so that actually comes after a chain of , like , lda plus everything . so it 's , like , it takes a long time to get a decision out of it . and and you can actually do it for final frame - dropping , but not for the va - f noise estimation . professor b: you see , the idea is that the , initial decision to that you 're in silence or speech happens pretty quickly . professor b: and that . and that 's fed forward , and you say " , flush everything , it 's not speech anymore " . professor b: , it is used , it 's only used f , it 's used for frame - dropping . , it 's used for end of utterance because , there 's if you have more than five hundred milliseconds of nonspeech then you figure it 's end of utterance like that . phd e: and it seems important for , like , the on - line normalization . we do n't want to update the mean and variance during silen long silence portions . so it has to be done before this mean and variance normalization . professor b: . so probably the vad and maybe testing out the noise estimation a little bit . , keeping the same method but , seeing if you cou but , noise estimation could be improved . those are related issues . it probably makes sense to move from there . and then , later on in the month we wanna start including the neural net at the end . ok . anything else ? professor b: . you did n't fall . that 's good . our e our effort would have been devastated if you guys had run into problems . professor b: , that 's the plan . i the week after he 'll be , going back to europe , and so we wanna professor b: no , no . he 's he 's dropped into the us . . so , . , the idea was that , we 'd sort out where we were going next with this work before he , left on this next trip . good . , barry , you just got through your quals , so i if you have much to say . 
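the point above about on-line normalization — the running mean and variance should not be updated during long silence portions , so the vad decision has to precede it — can be pictured with this toy version ; alpha and the epsilon floor are invented :

```python
import numpy as np

def online_normalize(frames, speech_flags, alpha=0.995):
    """On-line mean/variance normalization that freezes its running
    estimates during silence, updating only on speech frames."""
    mean = np.zeros(frames.shape[1])
    var = np.ones(frames.shape[1])
    out = np.empty_like(frames, dtype=float)
    for t in range(len(frames)):
        if speech_flags[t]:
            mean = alpha * mean + (1 - alpha) * frames[t]
            var = alpha * var + (1 - alpha) * (frames[t] - mean) ** 2
        out[t] = (frames[t] - mean) / np.sqrt(var + 1e-8)
    return out
```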
grad d: no , just , looking into some of the things that , , john ohala and hynek , gave as feedback , as a starting point for the project . in in my proposal , i was thinking about starting from a set of , phonological features , or a subset of them . , but that might not be necessarily a good idea according to , john . he said , , these phonological features are figments of imagination also . s professor b: in conversational speech in particular . you can put them in pretty reliably in synthetic speech . professor b: but we do n't have too much trouble recognizing synthetic speech since we create it in the first place . so , it 's grad d: right . so , a better way would be something more data - driven , just looking at the data and seeing what 's similar and what 's not similar . so , i ' m , taking a look at some of , sangita 's work on traps . she did something where , w where the traps learn she clustered the temporal patterns of , certain phonemes in m averaged over many , many contexts . and , some things tended to cluster . right ? , like stop consonants clustered really . , silence was by its own self . and , , v vocalic was clustered . and , so , those are interesting things to phd a: so you 're now you 're looking to try to gather a set of these types of features ? grad d: just to see where i could start off from , ? a a set of small features and continue to iterate and find , a better set . professor b: ok . , short meeting . that 's ok . so next week hopefully we 'll can get hynek here to join us and ,
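the data-driven grouping described above — clustering the averaged temporal patterns of phones , in the spirit of sangita's traps work , so that stops fall together , vocalics fall together , and silence sits on its own — could start from something as simple as a toy k-means ; all shapes and the value of k are made up :

```python
import numpy as np

def cluster_patterns(patterns, k=8, iters=50, seed=0):
    """Toy k-means over averaged temporal patterns (one row per phone).
    Assumes at least k rows; purely illustrative."""
    pats = np.asarray(patterns, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pats[rng.choice(len(pats), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pats[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest center per phone
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pats[labels == j].mean(axis=0)
    return labels, centers
```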
icsi's meeting recorder group have returned from a meeting with some important decisions to make. they have developed a single piece of software which allows them to implement either of their two main approaches to dealing with noise. their current baseline performance is close to the best rate from the last project evaluation , and it does not yet include everything the group have been working on. with this in mind , they have decided to freeze most settings and concentrate on only a few key aspects : the neural network , the voice activity detector , and the noise estimation. by the time a senior member of their research partner ogi returns , they want to have a firm plan of what they will be doing. system latency is still an issue , but limits have not yet been set by the project heads. the group have encountered a problem with frame-dropping , and will need to bear that stage in mind since their neural network would come after it. while deciding which of the two approaches to finally adopt , the group put together one piece of software for all to use that implements both spectral subtraction and wiener filtering. speaker me026 has done his quals , and is looking at some of the feedback he received.
grad f: let 's see . so . what ? i ' m supposed to be on channel five ? her . nope . does n't seem to be , grad d: sibilance . three , three . i am three . see , that matches the seat up there . so . grad d: cuz it 's that starts counting from zero and these start counting from one . ergo , the classic off - by - one error . grad b: yes , you ' ve bested me again . that 's how of our continuing interaction . damn ! foiled again ! grad d: so is keith showing up ? he 's talking with george right now . , is he gon na get a rip himself away from that ? grad e: , he was very affirmative in his way of saying he will be here at four . but , that was before he knew about that george lecture probably . professor c: right . this this is not it 's not bad for the project if keith is talking to george . ok . so my suggestion is we just grad e: , i had informal talks with most of you . so , eva just reported she 's really happy about the cbt 's being in the same order in the xml as in the be java declaration format grad e: the , java the embedded bayes wants to take input , a bayes - net in some java notation and eva is using the xalan style sheet processor to convert the xml that 's output by the java bayes for the into the , e bayes input . grad f: actually , maybe i could try , like , emailing the guy and see if he has any something already . that 'd be weird , that he has both the java bayes and the embedded bayes in professor c: he charges so much . right . no , it 's a good idea that you may as ask . grad e: and , pretty mu on t on the top of my list , i would have asked keith how the " where is x ? " hand parse is standing . but we 'll skip that . , there 's good news from johno . the generation templates are done . grad d: so the trees for the xml trees for the gene for the synthesizer are written . so need to do the , write a new set of tree combining rules . but those 'll be pretty similar to the old ones . just gon na be grad e: ok , so natural language generation produces not a just a surface string that is fed into a text - to - speech but , a surface string with a syntax tree that 's fed into a concept - to - speech . grad e: now and this concept - to - speech module has certain rules on how if you get the following syntactic structure , how to map this onto prosodic rules . and fey has foolheartedly to rewrite , the german concept syntax - to - prosody rules grad e: into english . and therefore the , if it 's ok that we give her a couple of more hours per week , then she 'll do that . grad e: no , i my is i asked for a commented version of that file ? if we get that , then it 's doable , even without getting into it , even though the scheme li , is really documented in the festival . grad d: , i if you 're not used to functional programming , scheme can be completely incomprehensible . cuz , there 's no like there 's lots of unnamed functions professor c: anyway , it we 'll sort this out . but anyway , send me the note and then i 'll - i 'll check with , morgan on the money . i do n't anticipate any problem but we have to ask . , so this was { nonvocalsound } , on the generation thing , if sh y she 's really going to do that , then we should be able to get prosody as . so it 'll say it 's nonsense with perfect intonation . grad d: are we gon na can we change the voice of the thing , because right now the voice sounds like a murderer . 
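for the xml conversion mentioned earlier in this discussion — eva running the xalan style sheet processor over the javabayes xml to produce the embedded bayes input format — the shape of the operation is just an xslt application . xalan itself is a java tool ; purely for illustration , the same kind of transformation driven from python with lxml (assuming lxml is installed , and with all file names hypothetical) :

```python
from lxml import etree

# hypothetical file names; the real style sheet is the one eva wrote
transform = etree.XSLT(etree.parse("javabayes_to_ebayes.xsl"))
bayes_net = etree.parse("tourist_net.xml")   # XML as exported by JavaBayes
print(str(transform(bayes_net)))             # Embedded Bayes declarations
```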
grad e: it is , we have the choice between the , usual festival voices , which i already told the smartkom people we are n't gon na use because they 're really bad . grad e: ogi has , crafted a couple of diphone type voices that are really and we 're going to use that . we can still , d agree on a gender , if we want . so we still have male or female . grad b: whatever sounds best . unfortunately , probably male voices , a bit more research on . professor c: it turns out there 's the long - standing links with these guys in the speech group . professor c: , there 's this guy who 's got a joint appointment , hynek hermansky . he 's - spends a fair amount of time here . anyway . leave it . wo n't be a problem . grad e: ok . and it 's probably also uninteresting for all of you to , learn that as of twenty minutes ago , david and i , per accident , managed to get the whole smartkom system running on the , icsi linux machines with the icsi nt machines thereby increasing the number of running smartkom systems in this house from one on my laptop to three . grad e: , i suggested to try something that was really even though against better knowledge should n't have worked , but it worked . intuition . grad e: and , we 'll never found out why . it - it 's just like why the generation ma the presentation manager is now working ? grad a: ! this is something you ha you get used to as a programmer , right ? grad e: so , the people at saarbruecken and i decided not to touch it ever again . , that would work . i was gon na ask you where something is and what we know about that . grad e: where is x ? , but by , we can ask , did you get to read all four hundred words ? grad d: i wa i was looking at it . it does n't follow logically . it does n't the first paragraph does n't seem to have any link to the second paragraph . professor c: but c the meeting looks like it 's , it 's gon na be good . so . it 's professor c: , i ra i ran across it in i do n't even know where , some just some weird place . and , , i ' m surprised i did n't know about it professor c: right , or some so but anyway , i so i did see that . wha . before we get started on this st so i also had a email correspondence with daphne kohler , who said yes she would love to work with us on the , , using these structured belief - nets and but starting in august , that she 's also got a new student working on this and that we should get in touch with them again in august and then we 'll figure out a way for you to get connected with , their group . so that 's , looks pretty good . and , i 'll say it now . so , and it looks to me like we 're now at a good point to do something start working on something really hard . we ' ve been so far working on things that are easy . , w which is mental spaces and - or professor c: ? it 's a hard puzzle . but the other part of it is the way they connect to these , probabilistic relational models . so there 's all the problems that the linguists know about , about mental spaces , and the cognitive linguists know about , but then there 's this problem of the belief - net people have only done a moderately good job of dealing with temporal belief - nets . , which they call dynamic they incorrectly call dynamic belief - nets . professor c: so there 's a term " dynamic belief - net " , does n't mean that . it means time slices . and srini used those and people use them . 
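the terminological point above — that a so-called "dynamic" belief-net in this sense is just the same network replicated over time slices , with links between consecutive slices — can be made concrete with a small sketch ; node and edge names are invented :

```python
def unroll_dbn(slice_nodes, intra_edges, inter_edges, n_slices):
    """Replicate one belief-net slice n_slices times; inter_edges
    connect a node in slice t-1 to the same-named or different node
    in slice t."""
    nodes, edges = [], []
    for t in range(n_slices):
        nodes += [(name, t) for name in slice_nodes]
        edges += [((a, t), (b, t)) for a, b in intra_edges]
        if t > 0:
            edges += [((a, t - 1), (b, t)) for a, b in inter_edges]
    return nodes, edges

# e.g. unroll_dbn(["intent"], [], [("intent", "intent")], 3)
```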
but one of the things i w would like to do over the next , month , it may take more , is to st understand to what extent we can not only figure out the constructions for them for multiple worlds and what the formalism will look like and where the slots and fillers will be , but also what that would translate into in terms of belief - net and the inferences . so the story is that if you have these probabilistic relational models , they 're set up , in principle , so that you can make new instances and instances connect to each other , and all that , so it should be feasible to set them up in such a way that if you ' ve got the past tense and the present tense and each of those is a separate , belief structure that they do their inferences with just the couplings that are appropriate . but that 's g that 's , as far as tell , it 's putting together two real hard problems . one is the linguistic part of what are the couplings and when you have a certain , construction , that implies certain couplings and other couplings , between let 's say between the past and the present , or any other one of these things and then we have this inference problem of exactly technically how does the belief - net work if it 's got , let 's say one in , different tenses or my beliefs and your beliefs , or any of these other ones of multiple models . , in the long run we need to solve both of those and my suggestion is that we start digging into them both , in a way we that , th hopefully turns out to be consistent , so that the and sometimes it 's actually easier to solve two hard problems than one because they constrain each other . if you ' ve got huge ra huge range of possible choices we 'll see . but anyway , so that 's , grad a: , like , i solved the problem of we were talking about how do you various issues of how come a plural noun gets to quote " count as a noun phrase " , occur as an argument of a higher construction , but a bare singular stem does n't get to act that way . , and it would take a really long time to explain it now , but i ' m about to write it up this evening . i solved that at the same time as " how do we keep adjectives from floating to the left of determiners and how do we keep all of that from floating outside the noun phrase " to get something like " i the kicked dog " . did it did it at once . professor c: no , i know , i th that is gon na be the key to this wh to th the big project of the summer of getting the constructions right is that people do manage to do this so there probably are some , relatively clean rules , they 're just not context - free trees . and if we if the formalism is good , then we should be able to have , moderate scale thing . and that is , keith , what i encouraged george to be talking with you about . not the formalism yet professor c: but the phenomena . the p , another thing , there was this , thing that nancy to in a weak moment this morning that professor c: anyway , that we were that we 're gon na try to get a , first cut at the revised formalism by the end of next week . professor c: , just trying to write up essentially what you guys have worked out so that everybody has something to look at . we ' ve talked about it , but only the innermost inner group currently , professor c: that th there 's one of the advantages of a document , right ? , is that it actually transfers from head to head . so anyway . professor c: communication , documentation and . anyway , so , with a little luck l let 's , let 's have that as a goal anyway . 
grad a: so , what was the date there ? monday or ? it 's a friday . professor c: no , no . no , w we 're talking about a week fr e end of next week . professor c: now if it turns out that effort leads us into some big hole that 's fine . , if you say we 're dump . there 's a really hard problem we have n't solved yet that , that 's just fine . grad a: but at least try and work out what the state of the art is right now . professor c: right , t if to the extent that we have it , let 's write it and to the extent we do n't , let 's find out what we need to do . grad e: can we ? is it worth thinking of an example out of our tourism thing domain , that involves a decent mental space shift or setting up professor c: it is , but i interrupted before keith got to tell us what happened with " where is the powder - tower ? " or whatever grad a: . , what was supposed to happen ? i ' ve been actually caught up in some other ones , so , , i do n't have a write - up of or i have n't elaborated on the ideas that we were already talking about which were grad e: , . i think we already came to the conclusion that we have two alternative paths that we two alternative ways of representing it . one is a has a grad a: it 's gone . the question of whether the polysemy is like in the construction or pragmatic . grad a: it has to be the second case . , so d ' you is it clear what we 're talking about here ? grad a: the question is whether the construction is semantic or like ambiguous between asking for location and asking for path . grad a: but pragmatically that 's construed as meaning " tell me how to get there " . grad e: so assume these are two , nodes we can observe in the bayes - net . so these are either true or false and it 's also just true or false . if we encounter a phrase such as " where is x ? " , should that set this to true and this to true , and the bayes - net figures out which under the c situation in general is more likely ? , or should it just activate this , have this be false , and the bayes - net figures out whether this actually now means ? professor c: ok , so that 's a separate issue . so i a i th i agree with you that , it 's a disaster to try to make separate constructions for every , pragmatic reading , although there are some that will need to be there . professor c: you ca n't do that either . but , c almost certainly " can you pass the salt " is a construction worth noting that there is this th this grad a: so right , this one is maybe in the gray area . is it is it like that or is it just obvious from world knowledge that no one you would n't want to know the location without wanting to know how to get there or whatever . grad e: one or in some cases , it 's quite definitely s so that you just know wanna know where it is . professor c: and i , see , the more important thing at this stage is that we should be able to know how we would handle it in ei f in the short run it 's more important to know how we would treat technically what we would do if we decided a and what we would do if we decided b , than it is t to decide a or b r right now . grad b: which one it is . cuz there will be other k examples that are one way or the other . right . professor c: w we know for that we have to be able to do both . so i in the short run , let 's be real clear on h what the two alternatives would be . 
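the first of the two alternatives above — observe both surface-cue nodes for "where is x ?" and let the bayes-net arbitrate between the location reading and the path reading — amounts to ordinary posterior inference over the two readings . a toy version , with every number invented :

```python
def posterior_reading(priors, likelihoods, observed=True):
    """priors: P(reading); likelihoods: P(cue observed | reading).
    Returns P(reading | cue) by Bayes' rule."""
    scores = {r: priors[r] * (likelihoods[r] if observed else 1 - likelihoods[r])
              for r in priors}
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

# made-up numbers: the "where is x?" cue is compatible with both readings
print(posterior_reading({"location": 0.4, "path": 0.6},
                        {"location": 0.9, "path": 0.7}))
```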
grad e: and then the we had another idea floating around , which we wanted to , get your input on , and that concerns the but w we would have a person that would like to work on it , and that 's ir - irina gurevich from eml who is going to be visiting us , the week before , august and a little bit into august . and she would like to apply the ontology that is , being crafted at eml . that 's not the one i sent you . the one i sent you was from gmd , out of a european crumpet . grad e: , and one of the reas one of the those ideas was , so , back to the old johno observation that if y if you have a dialogue history and it said the word " admission fee " was , mentioned , it 's more likely that the person actually wants to enter than just take a picture of it from the outside . now what could imagine to , have a list for each construction of things that one should look up in the discourse history , ? that 's the really stupid way . then there is the really clever way that was suggested by keith and then there is the , middle way that i ' m suggesting and that is you get x , which is whatever , the castle . the ontology will tell us that castles have opening hours , that they have admission fees , they have whatever . and then , this is we go via a thesaurus and look up certain linguistic surface structures that are related to these concepts and feed those through the dialogue history and check dynamically for each e entity . we look it up check whether any of these were mentioned and then activate the corresponding nodes on the discourse side . but keith suggested that a much cleaner way would be is , to keep track of the discourse in such a way that you if that something like that ha has been mentioned before , this just a continues to add up , in th in a grad a: so if someone mentions admission f fees , that activates an enter schema which sticks around for a little while in your rep in the representation of what 's being talked about . and then when someone asks " where is x ? " you ' ve already got the enter schema activated grad d: , is it does n't it seem like if you just managed the dialogue history with a thread , that , kept track of ho of the activity of , cuz it would the thread would nodes like , needed to be activated , so it could just keep track of how long it 's been since something 's been mentioned , and automatically load it in . professor c: you could do that . but here 's a way in th in the bl bayes - net you could think about it this way , that if at the time " admissions fee " was mentioned you could increase the probability that someone wanted to enter . grad d: we - th that 's what i wa i was n't i was i was n't thinking in terms of enter schemas . i was just professor c: fair enough , ok , but , in terms of the c the current implementation right ? so that professor c: th that th the conditional probability that someone so at the time you mentioned it this is this is essentially the bayes - net equivalent of the spreading activation . it 's in some ways it 's not as good but it 's the implementation we got . professor c: we do n't have a connectionist implementation . now my is that it 's not a question of time but it is a question of whether another intervening object has been mentioned . professor c: , we could look at dialo this is the other thing we ha we do is , is we have this data coming which probably will blow all our theories , professor c: but skipping that so but my is what 'll probably will happen , here 's a here 's a proposed design . 
is that there 're certain constructions which , for our purposes do change the probabilities of eva decisions and various other kinds th that the , standard way that the these contexts work is stack - like or whatever , but that 's the most recent thing . and so it could be that when another , en tourist entity gets mentioned , you professor c: re essentially re - initiali , re - i essentially re - initialize the state . and i if we had a fancier one with multiple worlds you could have , you could keep track of what someone was saying about this and that . , " i wanna go in the morning grad a: here 's my plan for today . professor c: i wanna here 's my plan for tomorrow . " professor c: in the afternoon to the powder - tower , tal so i ' m talking about shopping and then you say , , " what 's it cost ? " . so one could imagine , but not yet . but i do th think that the it 'll turn out that it 's gon na be depend on whether there 's been an override . grad e: , if you ask " how much does a train ride and cinema around the vineyards cost ? " and then somebody tells you it 's sixty dollars and then you say " ok how much is , i would like to visit the " whatever , something completely different , " then i go to , point reyes " , it 's not more likely that you want to enter anything , but it 's , a complete rejection of entering by doing that . grad b: so when you admit have admission fee and it changes something , it 's only for that particular it 's relational , right ? it 's only for that particular object . professor c: , i th , and the simple idea is that it 's on it 's only for m for the current , tourist e entity of instre interest . grad e: . but that 's this function , so , has the current object been mentioned in with a question about concerning its professor c: no , no . it 's it it goes the other d it goes in the other direction . is when th when the this is mentioned , the probability of , let 's say , entering changes grad d: you could just hav , just , ob it it observes an er , it sets the a node for " entered " or " true " , professor c: now , but ro - robert 's right , that to determine that , ok ? you may want to go through a th thesaurus professor c: so , if the issue is , if so now th this construction has been matched and you say " ok . does this actually have any implications for our decisions ? " then there 's another piece of code that presumably does that computation . professor c: . but but what 's robert 's saying is , and he 's right , is you do n't want to try to build into the construction itself all the synonyms and all , all the wo maybe . i 'll have to think about that . i . it th thi think of arguments in either direction on that . but somehow you want to do it . grad e: - . , it 's just another , construction side is how to get at the possible inferences we can draw from the discourse history or changing of the probabilities , and - or grad b: it 's like i g the other thing is , whether you have a m user model that has , whatever , a current plan , whatever , plans that had been discussed , and i , grad d: what , what 's the argument for putting it in the construction ? is it just that the s synonym selection is better , or ?
professor c: , wel , the ar the the argument is that you 're gon na have the if you ' ve recognized the word , which means you have a lexical construction for it , so you could just as tag the lexical construction with the fact that it 's a , thirty percent increase in probability of entering . you so you could invert the whole thing , so you s you tag that information on to the lexicon professor c: since you had to recognize it anyway . that that 's the argument in the other direction . at , and this is grad e: even though the lexical construction itself out of context , wo n't do it . , y you have to keep track whether the person says but i but i ' m not interested in the opening times is a more a v type . grad e: so . but , we 'll , we have time to this is a s just a sidetrack , but it 's also something that people have not done before , is , abuse an ontology for these kinds of , inferences , on whether anything relevant to the current something has been , has crept up in the dialogue history already , or not . and , i have the , if we wanted to have that function in the dialogue hi dialogue module of smartkom , i have the written consent of jan to put it in there . grad e: yes , . that 's , i ' m keeping on good terms with jan . professor c: you ' ve noticed that . so , it 's very likely that robert 's thesis is going to be along these lines , professor c: and the local rules are if it 's your thesis , you get to decide how it 's done . ok . so if , if this is , if this becomes part of your thesis , you can say , hey we 're gon na do it this way , that 's the way it 's done . grad b: yay , it 's not me . it 's always me when it 's someone 's thesis . professor c: no , no ! no , no . we ' ve got a lot we ' ve got a lot of theses going . grad e: , let 's talk after friday the twenty - ninth . then we 'll see how f professor c: right . so h he 's got a th he 's got a meet meeting in germany with his thesis advisor . professor c: , right . so , that 's the other thing . , this is , speaking of hard problems , this is a very good time , to start trying to make explicit where construal comes in and , where c where the construction per - se ends and where construal comes in , professor c: right . so . right . so thing that 's part of why we want the formalism , is because th it is gon na have implicit in it grad d: , but it he the decisions i made wer had to do with my thesis . so consequently do n't i get to decide then that it 's robert 's job ? grad b: , i 'll just pick a piece of the problem and then just push the hard into the center and say it 's robert 's . like . grad e: i ' ve always been completely in favor of consensus decisions , so we 'll find a way . grad e: it it might even be interesting then to say that i should be forced to , pull some of the ideas that have been floating in my head out of the , out of the top hat professor c: ri - no . so , wh you had you ha you had done one draft . professor c: i this is i ' m shocked . this is the first time i ' ve seen a thesis proposal change . right . anyway , . so . professor c: but , a second that would be great . so , a sec you 're gon na need it anyway . grad e: , and i would like to d discuss it and , get you guys 's input and make it bomb - proof . professor c: so that , so th thi this , so this is the point , is we 're going to have to cycle through this , but th the draft of the p proposal on the constructions is going to tell us a lot about what we think needs to be done by construal . and , we oughta be doing it . 
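to make the "middle way" described above concrete , here is a minimal python sketch of the lookup chain: entity to ontology concepts , concepts to thesaurus surface forms , surface forms matched against the discourse history. the ontology and thesaurus tables and every name in the sketch are invented placeholders , not the actual eml ontology.

```python
# minimal sketch of the "middle way": for an entity such as a castle,
# ask the ontology which properties it has, expand each property into
# surface forms via a thesaurus, scan the discourse history for those
# forms, and report which discourse-side nodes to activate.
# ONTOLOGY and THESAURUS are illustrative stand-ins.

ONTOLOGY = {
    "castle": ["admission_fee", "opening_hours"],
}

THESAURUS = {
    "admission_fee": ["admission fee", "entrance fee", "ticket"],
    "opening_hours": ["opening hours", "closing time", "open until"],
}

def activated_nodes(entity, discourse_history):
    """return the concepts for `entity` whose surface forms were
    mentioned anywhere in the discourse history."""
    history = " ".join(discourse_history).lower()
    return [
        concept
        for concept in ONTOLOGY.get(entity, [])
        if any(form in history for form in THESAURUS.get(concept, []))
    ]

# activated_nodes("castle", ["how much is the admission fee ?"])
# -> ["admission_fee"]; the corresponding node in the net would then
#    shift the probability that the user wants to enter.
```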
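and a toy illustration of the bayes-net stand-in for spreading activation that was just discussed: a trigger phrase raises the probability of entering for the current tourist entity only , and mentioning another entity re-initializes the state (the "override"). the 0.5 prior and the boosts are made up , echoing the hypothetical "thirty percent" figure; real values would live in the net's conditional probability tables.

```python
# toy illustration of the bayes-net equivalent of spreading activation:
# a trigger phrase raises p(enter) for the current tourist entity only,
# and mentioning another entity re-initializes the state.

PRIOR_ENTER = 0.5
ENTER_TRIGGERS = {"admission fee": 0.3, "opening hours": 0.2}

class DiscourseContext:
    def __init__(self):
        self.entity = None
        self.p_enter = PRIOR_ENTER

    def mention_entity(self, entity):
        # a new tourist entity of interest resets the state
        if entity != self.entity:
            self.entity = entity
            self.p_enter = PRIOR_ENTER

    def observe(self, utterance):
        utterance = utterance.lower()
        for trigger, boost in ENTER_TRIGGERS.items():
            if trigger in utterance:
                self.p_enter = min(1.0, self.p_enter + boost)

ctx = DiscourseContext()
ctx.mention_entity("powder tower")
ctx.observe("what is the admission fee ?")  # p_enter: 0.5 -> 0.8
ctx.mention_entity("point reyes")           # reset: p_enter -> 0.5
```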
grad e: , we need some then we need to make some dates . meeting regular meeting time for the summer , we really have n't found one . we did thursdays one for a while . talked to ami . it 's - it 's a coincidence that he ca n't do could n't do it today here . professor c: , you were n't here , but s , and so , if that 's ok with you , grad e: mmm . and , . how do we feel about doing it wednesdays ? because it seems to me that this is a time where when we have things to discuss with other people , there they seem to be s tons of people around . professor c: those people who might not be around so much . , i do n't care . i have no fixed grad a: to tell you the truth , i 'd rath i 'd , i 'd would like to avoid more than one icsi meeting per day , if possible . but . i . whatever . grad e: , if one thing is , this room is taken at after three - thirty pr every day by the data collection . so we have subjects anyway except for this week , we have subjects in here . that 's why it was one . so we just knew i grad e: no , he can . so let 's say thursday one . but for next week , this is a bit late . so i would suggest that we need to talk grad b: could we do thursday at one - thirty ? would that be horrible ? really ? grad b: , ok . you did n't tell me that . ok , that 's fine . grad e: w , actually we w we did scrap our monday time just because bhaskara could n't come monday . grad d: although you wanted to go camping on monday er , take off mondays a lot so you could go camping . grad e: get a fresh start , that 's another s thing . but , . , there are also usually then holidays anyways . like sometimes it works out that way . grad b: , the linguists ' meeting i happens to be at two , but that 's . grad a: right ? so . and , nancy and i are just always talking anyway and sometimes we do it in that room . so , . grad e: ok , so l forget about the b the camping thing . so let 's , any other problems w ? but , i suggested monday . if that 's a problem for me then i should n't suggest it . professor c: earlier we at least for next week , there 's a lot of we want to get done , so why do n't we plan to meet monday and we 'll see if we want to meet any more than that . professor c: here i ' m blissfully agreeing to things and realizing that i actually do have some scheduled on monday . grad b: y you 'll come and take all the headph the good headphones first and then remind me . grad b: fine . yes . would you like to ? ok . i was actually gon na work on it for tomorrow like this weekend . grad e: i wo i would like i would get a notion of what you guys have in store for me . professor c: m @ , w maybe mond - maybe we can put this is part of what we can do monday , if we want . grad b: , so there was like , m in my head the goal to have like an intermediate version , like , everything i know . and then , w i would talk to you and figure out everything , that , see if they 're consistent . grad a: why do n't w maybe you and i should meet more or less first thing monday morning and then we can work on this . grad b: that 's fine . so we might continue our email thing and that might be fine , too . so , maybe i 'll send you some grad a: , if you have time after this i 'll show you the noun phrase thing . grad e: so the idea is on monday at two we 'll see an intermediate version of the formalism for the constructions , grad e: so it wo n't be , like , a for semi - formal presentation of my proposal . it 'll be more like towards finalizing that proposal . 
grad a: someday we also have to we should probably talk about the other side of the " where is x " construction , which is the issue of , how do you simulate questions ? what does the simspec look like for a question ? because it 's a little different . grad a: we had to we had an idea for this which seemed like it would probably work . professor c: great . ok . simspec may need we may n need to re - name that . professor c: ok ? so let 's think of a name for whatever the this intermediate structure is . , we talked about semspec , for " semantic spec specification " grad b: all the old like graphs , just change the just , like , mark out the professor c: anyway , so let 's for the moment call it that until we think of something better . and , we need to find part of what was missing were markings of all sorts that were n't in there , incl including the questions we did n't we never did figure out how we were gon na do emphasis in , the semspec . grad b: , we ' ve talked a little bit about that , too , which , it 's hard for me to figure out with our general linguistic issues , how they map onto this particular one , but ok , understood . professor c: but that 's part of the formalism is got to be , how things like that get marked . grad b: w do you have data , like the you have preliminary data ? cuz i know , we ' ve been using this one easy sentence and i ' m you guys have , maybe you are the one who ' ve been looking at the rest of it grad b: it 'd be useful for me , if we want to have it a little bit more data oriented . grad a: to tell you the truth , what i ' ve been looking at has not been the data so far , grad a: said " alright let 's see if get noun phrases and , major verb co , constructions out of the way first . " and i have not gotten them out of the way yet . surprise . so , i have not really approached a lot of the data , but like these the question one , since we have this idea about the indefinite pronoun thing and all that , i ca can try and , run with that , try and do some of the sentence constructions now . it would make sense . grad e: so mary fixed the car with a wrench . so you perform the mental sum and then , " who fixed the car with a wrench ? " you are told , to do this in the in analogously to the way you would do " someone fixed the car with a wrench " . and then you hand it back to your hippocampus and find out what that , grad a: the wh question has this as extra thing which says " and when you 're done , tell me who fills that slot " or w . and , this is a way to do it , the idea of saying that you treat from the simulation point of view or whatever you treat , wh constructions similarly to , indefinite pronouns like " someone fixed the car " because lots of languages , have wh questions with an indefinite pronoun in situ or whatever , grad a: and you just get intonation to tell you that it 's a question . so it makes sense grad a: it makes sense from that point of view , too , which is actually better . grad a: anyway , but just that thing and we 'll figure out exactly how to write that up and so on , but , no , all the focus . we just dropped that cuz it was too weird and we did n't even know , like , what we were talking about exactly , what the object of study was . professor c: . , if , i part of what the exercise is , t by the end of next week , is to say what are the things that we just do n't have answers for yet . that 's fine . 
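the proposal for wh-questions above lends itself to a small data-structure sketch: the question is simulated exactly like the indefinite-pronoun sentence , plus one extra marking that says which slot to report back afterwards. all field names here are invented for illustration; the actual semspec formalism was still being drafted.

```python
# sketch of marking a wh-question in the semspec: "who fixed the car
# with a wrench ?" is simulated exactly like "someone fixed the car
# with a wrench", with one extra marking naming the slot to report.

def fix_semspec(fixer, report_slot=None):
    spec = {
        "schema": "Fix",
        "roles": {"fixer": fixer, "fixed": "car", "instrument": "wrench"},
    }
    if report_slot is not None:
        # the wh-question's extra instruction: "when you 're done ,
        # tell me who fills that slot"
        spec["report"] = report_slot
    return spec

statement = fix_semspec("someone")                      # indefinite pronoun
question = fix_semspec("someone", report_slot="fixer")  # wh in situ
```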
grad e: , if you do wanna discuss focus background and then get me into that because , i wo i w scientifically worked on that for almost two years . grad b: , you should definitely , be on that maybe by after monday we 'll y you can see what things we are and are n't grad b: . with us ? i would say that tha that those discussions have been primarily , keith and me , like in th the meeting , he i thin like the last meeting we had , we were all very much part of it grad a: sometimes hans has been coming in there as like a devil 's advocate type role , grad a: like " this make , i ' m going to pretend i ' m a linguist who has nothing to do with this . this makes no sense . " and he 'll just go off on parts of it which definitely need fixing but are n't where we 're at right now , so it 's grad b: like like what you call certain things , which we decided long ago we do n't care that much right now . but in a sense , it 's good to know that he of all people , like maybe a lot of people would have m much stronger reactions , so , he 's like a relatively friendly linguist and yet a word like " constraint " causes a lot of problems . and , so . right . so . professor c: ok . this is consistent with the role i had suggested that he play , ok , which was that o one of the things i would like to see happen is a paper that was tentatively called " towards a formal cognitive semantics " which was addressed to these linguists who have n't been following this . so it could be that he 's actually , at some level , thinking about how am i going to communicate this story so , internally , we should just do whatever works , cuz it 's hard enough . but if he g if he turns is really gon na turn around and help t to write this version that does connect with as many as possible of the other linguists in the world then it becomes important to use terminology that does n't make it hard professor c: , it 's gon na be plenty hard for people to understand it as it is , but y you do n't want to make it worse . grad a: no , right . , tha that role is , indispensable but that 's not where our heads were at in these meetings . it was a little strange . professor c: , . no , that 's fine . wanted t to i have to catch up with him , and i wanted t to get a feeling for that . ok . grad a: cuz sometimes he sounds like we 're talking a bunch of goobledy - gook from his point of view . grad b: it 's good when we 're into data and looking at the some specific linguistic phenomenon in english or in german , in particular , whatever , that 's great , and ben and hans are , if anything , more , they have more to say than , let 's say , i would about some of these things . but when it 's like , w how do we capture these things , it 's definitely been keith and i who have d , who have worried more about the professor c: that 's , very close to the maximum number of people working together that can get something done . professor c: , but . but th then w then we have to come back to the bigger group . great . and then we 're gon we 're gon na because of this other big thing we have n't talked about is actually implementing this ? so that i the three of us are gon na connect tomorrow about that . grad b: , we could talk tomorrow . i was just gon na say , though , that , there was , out of a meeting with johno came the suggestion that " , could it be that the meaning constraints really are n't used for selection ? " which has been implicit in the parsing strategy we talked about . 
in which case we w we can just say that they 're the effects or the bindings . which , so far , in terms of like putting up all the constraints as , pushing them into type constraints , the when i ' ve , propo then proposed it to linguists who have n't yet given me , we have n't yet thought of a reason that would n't work . right ? as long as we allow our type constraints to be reasonably complex . professor c: , it has to in the sense that you 're gon na use them eventu it 's , it 's a , generate and test thing , professor c: and if you over - generate then you 'll have to do more . , if there are some constraints that you hold back and do n't use , in your initial matching then you 'll match some things , i d i do n't think there 's any way that it could completely fail . it it could be that , you wind up the original bad idea of purely context - free grammars died because there were just vastly too many parses . , exponentially num many parses . and so th the concern might be that not that it would fail , but that grad b: that it would still generate too many . right ? so by just having semantic even bringing semantics in for matching just in the form of j semantic types , right ? grad b: like " conceptually these have to be construed as this , and this " might still give us quite a few possibilities that , and and it certainly helps a lot . professor c: no question . and it 's a perfectly fine place to start . , and say , let 's see how far we can go this way . and , professor c: i ' m in favor of that . , cuz i it 's as , it 's real hard and if w if we grad b: so friday , monday . so . ok , that 's tuesday . like th that 's the conclusion . ok . grad b: it 's almost true . , i do n't have it this weekend , so , tsk do n't have to worry about that . grad b: speaking of dance , dance revolution i ca n't believe i ' m it 's a it 's like a game , but it 's for , like , dancing . hard to it 's like karaoke , but for dancing , and they tell you what it 's amazing . it 's so much fun . , it 's so good . my friend has a home version and he brought it over , and we are so into it . it 's so amazing . , y of it ? i i it 's one of your hobbies ? it 's great exercise , i must say . i ca n't to hear this . , definitely . they have , like , places instead of like , instead of karaoke bars now that have , like , ddr , like , i did n't until i started hanging out with this friend , who 's like " , bring over the ddr if you want . " , dance revolution ok . he actually brought a clone called stepping selection , but it 's just as good . anyw
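the selection strategy floated just above , matching on coarse semantic types and holding the full meaning constraints back as effects or bindings , can be sketched as generate-and-test. the constructions , the type table and the pre-chunked tokens are invented placeholders , and , as noted in the discussion , the failure mode is over-generation rather than outright failure.

```python
# generate-and-test sketch: match constructions on form plus coarse
# semantic types only, holding the full meaning constraints back to be
# applied later as effects/bindings. entities are assumed to be
# pre-chunked into single tokens.

CONSTRUCTIONS = [
    {"name": "WhereIsX",
     "pattern": ["where", "is", "<Entity>"],
     "types": {"<Entity>": "Landmark"}},
]

TYPES = {"powder tower": "Landmark", "sixty dollars": "Amount"}

def matches(construction, tokens):
    pattern = construction["pattern"]
    if len(tokens) != len(pattern):
        return False
    for slot, token in zip(pattern, tokens):
        if slot.startswith("<"):
            # test only the semantic type, not the full meaning
            if TYPES.get(token) != construction["types"][slot]:
                return False
        elif slot != token:
            return False
    return True

def candidate_parses(tokens):
    # the "generate" step; a later "test" step would check the
    # held-back meaning constraints on each candidate
    return [c["name"] for c in CONSTRUCTIONS if matches(c, tokens)]

# candidate_parses(["where", "is", "powder tower"]) -> ["WhereIsX"]
```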
minor technical issues , such as format conversions for xml and javabayes and the full translation of the smartkom generation module in english , are currently being resolved. the voice synthesiser will also be replaced by better technology. an important research issue to be investigated is how the concept of mental spaces and probabilistic relational models can be integrated into the belief-net. mental space interdependencies are based on relatively clean rules , since people seem to manage them easily. a step towards this goal is the construction formalism being put together. this formalism will eventually have to include ways to simulate questions , do emphasis and focus. the constructions could be built assuming either conventional or conversational implicature. at this stage both routes need to be examined. the formalism will also serve as a starting point for the definition of construal mechanisms. similarly , issues like time plans and discourse stacks are dependent on how the ontology and discourse history are going to be structured and linked. one suggestion was to use spreading activation as a paradigm for activating nodes in the belief-net. finally , using type constraints in the construction analysis should work , as long as they are complex enough not to generate too many parses. it is necessary to ask the javabayes programmer whether he already has xml conversion programs. for the smartkom generation module , all the syntax-to-prosody rules are going to be re-written for english. additionally , ogi can offer a range of synthesiser voices to choose from. the focus of the next meeting , whose time was rescheduled , will be the discussion of the revised construction formalism. the presentation will unify the existing ideas and help identify the areas in need of further work , such as how it can deal with time and tense use and how they affect inferences in belief-nets. the ambiguity in a "where is x?" construction can be coded in the formalism as a semantic feature or pushed forward to the belief-net where pragmatic features will disambiguate it: in terms of system design , both options need to be investigated at this stage. as the translation of the german smartkom into english moves on , the generation rules may prove difficult to tackle for someone without experience in functional programming , as they are written in lisp. as far as the construction analysis is concerned , the two problems that will need to be solved are to identify the couplings between constructions in different mental spaces and to define how inferences will work in the belief-net from a technical point of view. additionally , in the example "where is x?" construction , the ambiguity ( location or path ) could be coded either in the semantics of the construction or left to be determined by context. the former could mean creating a different construction for every slight pragmatic variation. on the other hand , some of the belief-net probabilities could be instantiated in the lexicon. specifying which approach to take when linking the ontology and the discourse history has also proven not to be straightforward. finally , it is still undecided where construal comes in , which would help delimit the constructions as well.
several technical matters are being resolved: a conversion program is being written for data to be translated between xml and the java embedded-bayes notation; the language generation templates are now available for the english version of the smartkom system; smartkom now works on three different machines at icsi. on the other hand , future collaboration on belief-nets has already been agreed upon with another research group. the construction analysis and formalism are also progressing. several issues that have been dealt with were mentioned during the meeting: indefinite pronouns and wh-questions , noun-phrase structure , etc. this analysis is being done with the help of a linguist , who often provides different perspectives on methods and terminology.
###dialogue: grad f: let 's see . so . what ? i ' m supposed to be on channel five ? her . nope . does n't seem to be , grad d: sibilance . three , three . i am three . see , that matches the seat up there . so . grad d: cuz it 's that starts counting from zero and these start counting from one . ergo , the classic off - by - one error . grad b: yes , you ' ve bested me again . that 's how of our continuing interaction . damn ! foiled again ! grad d: so is keith showing up ? he 's talking with george right now . , is he gon na get a rip himself away from that ? grad e: , he was very affirmative in his way of saying he will be here at four . but , that was before he knew about that george lecture probably . professor c: right . this this is not it 's not bad for the project if keith is talking to george . ok . so my suggestion is we just grad e: , i had informal talks with most of you . so , eva just reported she 's really happy about the cbt 's being in the same order in the xml as in the be java declaration format grad e: the , java the embedded bayes wants to take input , a bayes - net in some java notation and eva is using the xalan style sheet processor to convert the xml that 's output by the java bayes for the into the , e bayes input . grad f: actually , maybe i could try , like , emailing the guy and see if he has any something already . that 'd be weird , that he has both the java bayes and the embedded bayes in professor c: he charges so much . right . no , it 's a good idea that you may as ask . grad e: and , pretty mu on t on the top of my list , i would have asked keith how the " where is x ? " hand parse is standing . but we 'll skip that . , there 's good news from johno . the generation templates are done . grad d: so the trees for the xml trees for the gene for the synthesizer are written . so need to do the , write a new set of tree combining rules . but those 'll be pretty similar to the old ones . just gon na be grad e: ok , so natural language generation produces not a just a surface string that is fed into a text - to - speech but , a surface string with a syntax tree that 's fed into a concept - to - speech . grad e: now and this concept - to - speech module has certain rules on how if you get the following syntactic structure , how to map this onto prosodic rules . and fey has foolheartedly to rewrite , the german concept syntax - to - prosody rules grad e: into english . and therefore the , if it 's ok that we give her a couple of more hours per week , then she 'll do that . grad e: no , i my is i asked for a commented version of that file ? if we get that , then it 's doable , even without getting into it , even though the scheme li , is really documented in the festival . grad d: , i if you 're not used to functional programming , scheme can be completely incomprehensible . cuz , there 's no like there 's lots of unnamed functions professor c: anyway , it we 'll sort this out . but anyway , send me the note and then i 'll - i 'll check with , morgan on the money . i do n't anticipate any problem but we have to ask . , so this was { nonvocalsound } , on the generation thing , if sh y she 's really going to do that , then we should be able to get prosody as . so it 'll say it 's nonsense with perfect intonation . grad d: are we gon na can we change the voice of the thing , because right now the voice sounds like a murderer . 
grad e: it is , we have the choice between the , usual festival voices , which i already told the smartkom people we are n't gon na use because they 're really bad . grad e: ogi has , crafted a couple of diphone type voices that are really and we 're going to use that . we can still , d agree on a gender , if we want . so we still have male or female . grad b: whatever sounds best . unfortunately , probably male voices , a bit more research on . professor c: it turns out there 's the long - standing links with these guys in the speech group . professor c: , there 's this guy who 's got a joint appointment , hynek hermansky . he 's - spends a fair amount of time here . anyway . leave it . wo n't be a problem . grad e: ok . and it 's probably also uninteresting for all of you to , learn that as of twenty minutes ago , david and i , per accident , managed to get the whole smartkom system running on the , icsi linux machines with the icsi nt machines thereby increasing the number of running smartkom systems in this house from one on my laptop to three . grad e: , i suggested to try something that was really even though against better knowledge should n't have worked , but it worked . intuition . grad e: and , we 'll never found out why . it - it 's just like why the generation ma the presentation manager is now working ? grad a: ! this is something you ha you get used to as a programmer , right ? grad e: so , the people at saarbruecken and i decided not to touch it ever again . , that would work . i was gon na ask you where something is and what we know about that . grad e: where is x ? , but by , we can ask , did you get to read all four hundred words ? grad d: i wa i was looking at it . it does n't follow logically . it does n't the first paragraph does n't seem to have any link to the second paragraph . professor c: but c the meeting looks like it 's , it 's gon na be good . so . it 's professor c: , i ra i ran across it in i do n't even know where , some just some weird place . and , , i ' m surprised i did n't know about it professor c: right , or some so but anyway , i so i did see that . wha . before we get started on this st so i also had a email correspondence with daphne kohler , who said yes she would love to work with us on the , , using these structured belief - nets and but starting in august , that she 's also got a new student working on this and that we should get in touch with them again in august and then we 'll figure out a way for you to get connected with , their group . so that 's , looks pretty good . and , i 'll say it now . so , and it looks to me like we 're now at a good point to do something start working on something really hard . we ' ve been so far working on things that are easy . , w which is mental spaces and - or professor c: ? it 's a hard puzzle . but the other part of it is the way they connect to these , probabilistic relational models . so there 's all the problems that the linguists know about , about mental spaces , and the cognitive linguists know about , but then there 's this problem of the belief - net people have only done a moderately good job of dealing with temporal belief - nets . , which they call dynamic they incorrectly call dynamic belief - nets . professor c: so there 's a term " dynamic belief - net " , does n't mean that . it means time slices . and srini used those and people use them . 
but one of the things i w would like to do over the next , month , it may take more , is to st understand to what extent we can not only figure out the constructions for them for multiple worlds and what the formalism will look like and where the slots and fillers will be , but also what that would translate into in terms of belief - net and the inferences . so the story is that if you have these probabilistic relational models , they 're set up , in principle , so that you can make new instances and instances connect to each other , and all that , so it should be feasible to set them up in such a way that if you ' ve got the past tense and the present tense and each of those is a separate , belief structure that they do their inferences with just the couplings that are appropriate . but that 's g that 's , as far as tell , it 's putting together two real hard problems . one is the linguistic part of what are the couplings and when you have a certain , construction , that implies certain couplings and other couplings , between let 's say between the past and the present , or any other one of these things and then we have this inference problem of exactly technically how does the belief - net work if it 's got , let 's say one in , different tenses or my beliefs and your beliefs , or any of these other ones of multiple models . , in the long run we need to solve both of those and my suggestion is that we start digging into them both , in a way we that , th hopefully turns out to be consistent , so that the and sometimes it 's actually easier to solve two hard problems than one because they constrain each other . if you ' ve got huge ra huge range of possible choices we 'll see . but anyway , so that 's , grad a: , like , i solved the problem of we were talking about how do you various issues of how come a plural noun gets to quote " count as a noun phrase " , occur as an argument of a higher construction , but a bare singular stem does n't get to act that way . , and it would take a really long time to explain it now , but i ' m about to write it up this evening . i solved that at the same time as " how do we keep adjectives from floating to the left of determiners and how do we keep all of that from floating outside the noun phrase " to get something like " i the kicked dog " . did it did it at once . professor c: no , i know , i th that is gon na be the key to this wh to th the big project of the summer of getting the constructions right is that people do manage to do this so there probably are some , relatively clean rules , they 're just not context - free trees . and if we if the formalism is good , then we should be able to have , moderate scale thing . and that is , keith , what i encouraged george to be talking with you about . not the formalism yet professor c: but the phenomena . the p , another thing , there was this , thing that nancy to in a weak moment this morning that professor c: anyway , that we were that we 're gon na try to get a , first cut at the revised formalism by the end of next week . professor c: , just trying to write up essentially what you guys have worked out so that everybody has something to look at . we ' ve talked about it , but only the innermost inner group currently , professor c: that th there 's one of the advantages of a document , right ? , is that it actually transfers from head to head . so anyway . professor c: communication , documentation and . anyway , so , with a little luck l let 's , let 's have that as a goal anyway . 
grad a: so , what was the date there ? monday or ? it 's a friday . professor c: no , no . no , w we 're talking about a week fr e end of next week . professor c: now if it turns out that effort leads us into some big hole that 's fine . , if you say we 're dump . there 's a really hard problem we have n't solved yet that , that 's just fine . grad a: but at least try and work out what the state of the art is right now . professor c: right , t if to the extent that we have it , let 's write it and to the extent we do n't , let 's find out what we need to do . grad e: can we ? is it worth thinking of an example out of our tourism thing domain , that involves a decent mental space shift or setting up professor c: it is , but i interrupted before keith got to tell us what happened with " where is the powder - tower ? " or whatever grad a: . , what was supposed to happen ? i ' ve been actually caught up in some other ones , so , , i do n't have a write - up of or i have n't elaborated on the ideas that we were already talking about which were grad e: , . i think we already came to the conclusion that we have two alternative paths that we two alternative ways of representing it . one is a has a grad a: it 's gone . the question of whether the polysemy is like in the construction or pragmatic . grad a: it has to be the second case . , so d ' you is it clear what we 're talking about here ? grad a: the question is whether the construction is semantic or like ambiguous between asking for location and asking for path . grad a: but pragmatically that 's construed as meaning " tell me how to get there " . grad e: so assume these are two , nodes we can observe in the bayes - net . so these are either true or false and it 's also just true or false . if we encounter a phrase such as " where is x ? " , should that set this to true and this to true , and the bayes - net figures out which under the c situation in general is more likely ? , or should it just activate this , have this be false , and the bayes - net figures out whether this actually now means ? professor c: ok , so that 's a separate issue . so i a i th i agree with you that , it 's a disaster to try to make separate constructions for every , pragmatic reading , although there are some that will need to be there . professor c: you ca n't do that either . but , c almost certainly " can you pass the salt " is a construction worth noting that there is this th this grad a: so right , this one is maybe in the gray area . is it is it like that or is it just obvious from world knowledge that no one you would n't want to know the location without wanting to know how to get there or whatever . grad e: one or in some cases , it 's quite definitely s so that you just know wanna know where it is . professor c: and i , see , the more important thing at this stage is that we should be able to know how we would handle it in ei f in the short run it 's more important to know how we would treat technically what we would do if we decided a and what we would do if we decided b , than it is t to decide a or b r right now . grad b: which one it is . cuz there will be other k examples that are one way or the other . right . professor c: w we know for that we have to be able to do both . so i in the short run , let 's be real clear on h what the two alternatives would be . 
grad e: and then the we had another idea floating around , which we wanted to , get your input on , and that concerns the but w we would have a person that would like to work on it , and that 's ir - irina gurevich from eml who is going to be visiting us , the week before , august and a little bit into august . and she would like to apply the ontology that is , being crafted at eml . that 's not the one i sent you . the one i sent you was from gmd , out of a european crumpet . grad e: , and one of the reas one of the those ideas was , so , back to the old johno observation that if y if you have a dialogue history and it said the word " admission fee " was , mentioned , it 's more likely that the person actually wants to enter than just take a picture of it from the outside . now what could imagine to , have a list for each construction of things that one should look up in the discourse history , ? that 's the really stupid way . then there is the really clever way that was suggested by keith and then there is the , middle way that i ' m suggesting and that is you get x , which is whatever , the castle . the ontology will tell us that castles have opening hours , that they have admission fees , they have whatever . and then , this is we go via a thesaurus and look up certain linguistic surface structures that are related to these concepts and feed those through the dialogue history and check dynamically for each e entity . we look it up check whether any of these were mentioned and then activate the corresponding nodes on the discourse side . but keith suggested that a much cleaner way would be is , to keep track of the discourse in such a way that you if that something like that ha has been mentioned before , this just a continues to add up , in th in a grad a: so if someone mentions admission f fees , that activates an enter schema which sticks around for a little while in your rep in the representation of what 's being talked about . and then when someone asks " where is x ? " you ' ve already got the enter schema activated grad d: , is it does n't it seem like if you just managed the dialogue history with a thread , that , kept track of ho of the activity of , cuz it would the thread would nodes like , needed to be activated , so it could just keep track of how long it 's been since something 's been mentioned , and automatically load it in . professor c: you could do that . but here 's a way in th in the bl bayes - net you could think about it this way , that if at the time " admissions fee " was mentioned you could increase the probability that someone wanted to enter . grad d: we - th that 's what i wa i was n't i was i was n't thinking in terms of enter schemas . i was just professor c: fair enough , ok , but , in terms of the c the current implementation right ? so that professor c: th that th the conditional probability that someone so at the time you mentioned it this is this is essentially the bayes - net equivalent of the spreading activation . it 's in some ways it 's not as good but it 's the implementation we got . professor c: we do n't have a connectionist implementation . now my is that it 's not a question of time but it is a question of whether another intervening object has been mentioned . professor c: , we could look at dialo this is the other thing we ha we do is , is we have this data coming which probably will blow all our theories , professor c: but skipping that so but my is what 'll probably will happen , here 's a here 's a proposed design . 
is that there 're certain constructions which , for our purposes do change the probabilities of eva decisions and various other kinds th that the , standard way that the these contexts work is stack - like or whatever , but that 's the most recent thing . and so it could be that when another , en tourist entity gets mentioned , you professor c: re essentially re - initiali , re - i essentially re - initialize the state . and i if we had a fancier one with multiple worlds you could have , you could keep track of what someone was saying about this and that . , " i wanna go in the morning grad a: here 's my plan for today . bed014cdialogueact644 157354 157383 c professor s^e%-:s%- -1 0 i wanna here 's my plan for tomorrow . " professor c: in the afternoon to the powder - tower , tal so i ' m talking about shopping and then you say , , " what 's it cost ? " . so one could imagine , but not yet . but i do th think that the it 'll turn out that it 's gon na be depend on whether there 's been an override . grad e: , if you ask " how much does a train ride and cinema around the vineyards cost ? " and then somebody tells you it 's sixty dollars and then you say " ok how much is , i would like to visit the " whatever , something completely different , " then i go to , point reyes " , it 's not more likely that you want to enter anything , but it 's , a complete rejection of entering by doing that . grad b: so when you admit have admission fee and it changes something , it 's only for that particular it 's relational , right ? it 's only for that particular object . professor c: , i th , and the simple idea is that it 's on it 's only for m for the current , tourist e entity of instre interest . grad e: . but that 's this function , so , has the current object been mentioned in with a question about concerning its professor c: no , no . it 's it it goes the other d it goes in the other direction . is when th when the this is mentioned , the probability of , let 's say , entering changes grad d: you could just hav , just , ob it it observes an er , it sets the a node for " entered " or " true " , professor c: now , but ro - robert 's right , that to determine that , ok ? you may want to go through a th thesaurus professor c: so , if the issue is , if so now th this construction has been matched and you say " ok . does this actually have any implications for our decisions ? " then there 's another piece of code that presumably does that computation . professor c: . but but what 's robert 's saying is , and he 's right , is you do n't want to try to build into the construction itself all the synonyms and all , all the wo maybe . i 'll have to think about that . i . it th thi think of arguments in either direction on that . but somehow you want to do it . grad e: - . , it 's just another , construction side is how to get at the possible inferences we can draw from the discourse history or changing of the probabilities , and - or grad b: it 's like i g the other thing is , whether you have a m user model that has , whatever , a current plan , whatever , plans that had been discussed , and i , grad d: what , what 's the argument for putting it in the construction ? is it just that the s synonym selection is better , or ? 
professor c: , wel , the ar the the argument is that you 're gon na have the if you ' ve recognized the word , which means you have a lexical construction for it , so you could just as tag the lexical construction with the fact that it 's a , thirty percent increase in probability of entering . you so you could invert the whole thing , so you s you tag that information on to the lexicon professor c: since you had to recognize it anyway . that that 's the argument in the other direction . at , and this is grad e: even though the lexical construction itself out of context , wo n't do it . , y you have to keep track whether the person says but i but i ' m not interested in the opening times is a more a v type . grad e: so . but , we 'll , we have time to this is a s just a sidetrack , but it 's also something that people have not done before , is , abuse an ontology for these kinds of , inferences , on whether anything relevant to the current something has been , has crept up in the dialogue history already , or not . and , i have the , if we wanted to have that function in the dialogue hi dialogue module of smartkom , i have the written consent of jan to put it in there . grad e: yes , . that 's , i ' m keeping on good terms with jan . professor c: you ' ve noticed that . so , it 's very likely that robert 's thesis is going to be along these lines , professor c: and the local rules are if it 's your thesis , you get to decide how it 's done . ok . so if , if this is , if this becomes part of your thesis , you can say , hey we 're gon na do it this way , that 's the way it 's done . grad b: yay , it 's not me . it 's always me when it 's someone 's thesis . professor c: no , no ! no , no . we ' ve got a lot we ' ve got a lot of theses going . grad e: , let 's talk after friday the twenty - ninth . then we 'll see how f professor c: right . so h he 's got a th he 's got a meet meeting in germany with his thesis advisor . professor c: , right . so , that 's the other thing . , this is , speaking of hard problems , this is a very good time , to start trying to make explicit where construal comes in and , where c where the construction per - se ends and where construal comes in , professor c: right . so . right . so thing that 's part of why we want the formalism , is because th it is gon na have implicit in it grad d: , but it he the decisions i made wer had to do with my thesis . so consequently do n't i get to decide then that it 's robert 's job ? grad b: , i 'll just pick a piece of the problem and then just push the hard into the center and say it 's robert 's . like . grad e: i ' ve always been completely in favor of consensus decisions , so we 'll find a way . grad e: it it might even be interesting then to say that i should be forced to , pull some of the ideas that have been floating in my head out of the , out of the top hat professor c: ri - no . so , wh you had you ha you had done one draft . professor c: i this is i ' m shocked . this is the first time i ' ve seen a thesis proposal change . right . anyway , . so . professor c: but , a second that would be great . so , a sec you 're gon na need it anyway . grad e: , and i would like to d discuss it and , get you guys 's input and make it bomb - proof . professor c: so that , so th thi this , so this is the point , is we 're going to have to cycle through this , but th the draft of the p proposal on the constructions is going to tell us a lot about what we think needs to be done by construal . and , we oughta be doing it . 
grad e: , we need some then we need to make some dates . meeting regular meeting time for the summer , we really have n't found one . we did thursdays one for a while . talked to ami . it 's - it 's a coincidence that he ca n't do could n't do it today here . professor c: , you were n't here , but s , and so , if that 's ok with you , grad e: mmm . and , . how do we feel about doing it wednesdays ? because it seems to me that this is a time where when we have things to discuss with other people , there they seem to be s tons of people around . professor c: those people who might not be around so much . , i do n't care . i have no fixed grad a: to tell you the truth , i 'd rath i 'd , i 'd would like to avoid more than one icsi meeting per day , if possible . but . i . whatever . grad e: , if one thing is , this room is taken at after three - thirty pr every day by the data collection . so we have subjects anyway except for this week , we have subjects in here . that 's why it was one . so we just knew i grad e: no , he can . so let 's say thursday one . but for next week , this is a bit late . so i would suggest that we need to talk grad b: could we do thursday at one - thirty ? would that be horrible ? really ? grad b: , ok . you did n't tell me that . ok , that 's fine . grad e: w , actually we w we did scrap our monday time just because bhaskara could n't come monday . grad d: although you wanted to go camping on monday er , take off mondays a lot so you could go camping . grad e: get a fresh start , that 's another s thing . but , . , there are also usually then holidays anyways . like sometimes it works out that way . grad b: , the linguists ' meeting i happens to be at two , but that 's . grad a: right ? so . and , nancy and i are just always talking anyway and sometimes we do it in that room . so , . grad e: ok , so l forget about the b the camping thing . so let 's , any other problems w ? but , i suggested monday . if that 's a problem for me then i should n't suggest it . professor c: earlier we at least for next week , there 's a lot of we want to get done , so why do n't we plan to meet monday and we 'll see if we want to meet any more than that . professor c: here i ' m blissfully agreeing to things and realizing that i actually do have some scheduled on monday . grad b: y you 'll come and take all the headph the good headphones first and then remind me . grad b: fine . yes . would you like to ? ok . i was actually gon na work on it for tomorrow like this weekend . grad e: i wo i would like i would get a notion of what you guys have in store for me . professor c: m @ , w maybe mond - maybe we can put this is part of what we can do monday , if we want . grad b: , so there was like , m in my head the goal to have like an intermediate version , like , everything i know . and then , w i would talk to you and figure out everything , that , see if they 're consistent . grad a: why do n't w maybe you and i should meet more or less first thing monday morning and then we can work on this . grad b: that 's fine . so we might continue our email thing and that might be fine , too . so , maybe i 'll send you some grad a: , if you have time after this i 'll show you the noun phrase thing . grad e: so the idea is on monday at two we 'll see an intermediate version of the formalism for the constructions , grad e: so it wo n't be , like , a for semi - formal presentation of my proposal . it 'll be more like towards finalizing that proposal . 
grad a: someday we also have to we should probably talk about the other side of the " where is x " construction , which is the issue of , how do you simulate questions ? what does the simspec look like for a question ? because it 's a little different . grad a: we had to we had an idea for this which seemed like it would probably work . professor c: great . ok . simspec may need we may n need to re - name that . professor c: ok ? so let 's think of a name for whatever the this intermediate structure is . , we talked about semspec , for " semantic spec specification " grad b: all the old like graphs , just change the just , like , mark out the professor c: anyway , so let 's for the moment call it that until we think of something better . and , we need to find part of what was missing were markings of all sorts that were n't in there , incl including the questions we did n't we never did figure out how we were gon na do emphasis in , the semspec . grad b: , we ' ve talked a little bit about that , too , which , it 's hard for me to figure out with our general linguistic issues , how they map onto this particular one , but ok , understood . professor c: but that 's part of the formalism is got to be , how things like that get marked . grad b: w do you have data , like the you have preliminary data ? cuz i know , we ' ve been using this one easy sentence and i ' m you guys have , maybe you are the one who ' ve been looking at the rest of it grad b: it 'd be useful for me , if we want to have it a little bit more data oriented . grad a: to tell you the truth , what i ' ve been looking at has not been the data so far , grad a: said " alright let 's see if get noun phrases and , major verb co , constructions out of the way first . " and i have not gotten them out of the way yet . surprise . so , i have not really approached a lot of the data , but like these the question one , since we have this idea about the indefinite pronoun thing and all that , i ca can try and , run with that , try and do some of the sentence constructions now . it would make sense . grad e: so mary fixed the car with a wrench . so you perform the mental sum and then , " who fixed the car with a wrench ? " you are told , to do this in the in analogously to the way you would do " someone fixed the car with a wrench " . and then you hand it back to your hippocampus and find out what that , grad a: the wh question has this as extra thing which says " and when you 're done , tell me who fills that slot " or w . and , this is a way to do it , the idea of saying that you treat from the simulation point of view or whatever you treat , wh constructions similarly to , indefinite pronouns like " someone fixed the car " because lots of languages , have wh questions with an indefinite pronoun in situ or whatever , grad a: and you just get intonation to tell you that it 's a question . so it makes sense grad a: it makes sense from that point of view , too , which is actually better . grad a: anyway , but just that thing and we 'll figure out exactly how to write that up and so on , but , no , all the focus . we just dropped that cuz it was too weird and we did n't even know , like , what we were talking about exactly , what the object of study was . professor c: . , if , i part of what the exercise is , t by the end of next week , is to say what are the things that we just do n't have answers for yet . that 's fine . 
grad e: , if you do wanna discuss focus background and then get me into that because , i wo i w scientifically worked on that for almost two years . grad b: , you should definitely , be on that maybe by after monday we 'll y you can see what things we are and are n't grad b: . with us ? i would say that tha that those discussions have been primarily , keith and me , like in th the meeting , he i thin like the last meeting we had , we were all very much part of it grad a: sometimes hans has been coming in there as like a devil 's advocate type role , grad a: like " this make , i ' m going to pretend i ' m a linguist who has nothing to do with this . this makes no sense . " and he 'll just go off on parts of it which definitely need fixing but are n't where we 're at right now , so it 's grad b: like like what you call certain things , which we decided long ago we do n't care that much right now . but in a sense , it 's good to know that he of all people , like maybe a lot of people would have m much stronger reactions , so , he 's like a relatively friendly linguist and yet a word like " constraint " causes a lot of problems . and , so . right . so . professor c: ok . this is consistent with the role i had suggested that he play , ok , which was that o one of the things i would like to see happen is a paper that was tentatively called " towards a formal cognitive semantics " which was addressed to these linguists who have n't been following this . so it could be that he 's actually , at some level , thinking about how am i going to communicate this story so , internally , we should just do whatever works , cuz it 's hard enough . but if he g if he turns is really gon na turn around and help t to write this version that does connect with as many as possible of the other linguists in the world then it becomes important to use terminology that does n't make it hard professor c: , it 's gon na be plenty hard for people to understand it as it is , but y you do n't want to make it worse . grad a: no , right . , tha that role is , indispensable but that 's not where our heads were at in these meetings . it was a little strange . professor c: , . no , that 's fine . wanted t to i have to catch up with him , and i wanted t to get a feeling for that . ok . grad a: cuz sometimes he sounds like we 're talking a bunch of goobledy - gook from his point of view . grad b: it 's good when we 're into data and looking at the some specific linguistic phenomenon in english or in german , in particular , whatever , that 's great , and ben and hans are , if anything , more , they have more to say than , let 's say , i would about some of these things . but when it 's like , w how do we capture these things , it 's definitely been keith and i who have d , who have worried more about the professor c: that 's , very close to the maximum number of people working together that can get something done . professor c: , but . but th then w then we have to come back to the bigger group . great . and then we 're gon we 're gon na because of this other big thing we have n't talked about is actually implementing this ? so that i the three of us are gon na connect tomorrow about that . grad b: , we could talk tomorrow . i was just gon na say , though , that , there was , out of a meeting with johno came the suggestion that " , could it be that the meaning constraints really are n't used for selection ? " which has been implicit in the parsing strategy we talked about . 
in which case we w we can just say that they 're the effects or the bindings . which , so far , in terms of like putting up all the constraints as , pushing them into type constraints , the when i ' ve , propo then proposed it to linguists who have n't yet given me , we have n't yet thought of a reason that would n't work . right ? as long as we allow our type constraints to be reasonably complex . professor c: , it has to in the sense that you 're gon na use them eventu it 's , it 's a , generate and test thing , professor c: and if you over - generate then you 'll have to do more . , if there are some constraints that you hold back and do n't use , in your initial matching then you 'll match some things , i d i do n't think there 's any way that it could completely fail . it it could be that , you wind up the original bad idea of purely context - free grammars died because there were just vastly too many parses . , exponentially num many parses . and so th the concern might be that not that it would fail , but that grad b: that it would still generate too many . right ? so by just having semantic even bringing semantics in for matching just in the form of j semantic types , right ? grad b: like " conceptually these have to be construed as this , and this " might still give us quite a few possibilities that , and and it certainly helps a lot . professor c: no question . and it 's a perfectly fine place to start . , and say , let 's see how far we can go this way . and , professor c: i ' m in favor of that . , cuz i it 's as , it 's real hard and if w if we grad b: so friday , monday . so . ok , that 's tuesday . like th that 's the conclusion . ok . grad b: it 's almost true . , i do n't have it this weekend , so , tsk do n't have to worry about that . grad b: speaking of dance , dance revolution i ca n't believe i ' m it 's a it 's like a game , but it 's for , like , dancing . hard to it 's like karaoke , but for dancing , and they tell you what it 's amazing . it 's so much fun . , it 's so good . my friend has a home version and he brought it over , and we are so into it . it 's so amazing . , y of it ? i i it 's one of your hobbies ? it 's great exercise , i must say . i ca n't to hear this . , definitely . they have , like , places instead of like , instead of karaoke bars now that have , like , ddr , like , i did n't until i started hanging out with this friend , who 's like " , bring over the ddr if you want . " , dance revolution ok . he actually brought a clone called stepping selection , but it 's just as good . anyw ###summary: minor technical issues,such as format conversions for xml and javabayes and the full translation of the smartkom generation module in english , are currently being resolved. the voice synthesiser will also be replaced by better technology. an important research issue to be investigated is how the concept of mental spaces and probabilistic relational models can be integrated into the belief-net. mental space interdependencies are based on relatively clean rules , since people seem to manage them easily. a step towards this goal is the construction formalism being put together. this module will eventually have to include ways to simulate questions , do emphasis and focus. the constructions could be built assuming either conventional or conversational implicature. at this stage both routes need to be examined. the formalism will also serve as a starting point for the definition of construal mechanisms. 
similarly , issues like time plans and discourse stacks are dependent on how the ontology and discourse history are going to be structured and linked. one suggestion was to use spreading activation as a paradigm for activating nodes in the belief-net. finally , using type constraints in the construction analysis should work , as long as they are complex enough not to generate too many parses. it is necessary to ask the javabayes programmer whether he already has xml conversion programs. for the smartkom generation module , all the syntax-to-prosody rules are going to be re-written for english. additionally , ogi can offer a range of synthesiser voices to choose from.

the focus of the next meeting , whose time was rescheduled , will be the discussion of the revised construction formalism. the presentation will unify the existing ideas and help identify the areas in need of further work , such as how it can deal with time and tense use and how they affect inferences in belief-nets. the ambiguity in a "where is x?" construction can be coded in the formalism as a semantic feature or pushed forward to the belief-net where pragmatic features will disambiguate it: in terms of system design , both options need to be investigated at this stage.

as the translation of the german smartkom into english moves on , the generation rules may prove difficult to tackle for someone without experience in functional programming , as they are written in lisp. as far as the construction analysis is concerned , the two problems that will need to be solved are to identify the couplings between constructions in different mental spaces and to define how inferences will work in the belief-net from a technical point of view. additionally , in the example "where is x?" construction , the ambiguity ( location or path ) could be coded either in the semantics of the construction or treated as determined by context. the former could mean creating a different construction for every slight pragmatic variation. on the other hand , some of the belief-net probabilities could be instantiated in the lexicon. specifying which approach to take when linking the ontology and the discourse history has also proven not to be straightforward. finally , it is still undecided where construal comes in , which would help delimit the constructions as well.

several technical matters are being resolved: a conversion program is being written for data to be translated between xml and the java embedded-bayes notation; the language generation templates are now available for the english version of the smartkom system; smartkom now works on three different machines at icsi. on the other hand , future collaboration on belief-nets has already been agreed with another research group. the construction analysis and formalism are also progressing. several issues that have been dealt with were mentioned during the meeting: indefinite pronouns and wh-questions , noun-phrase structure , etc. this analysis is being done with the help of a linguist , who often provides different perspectives to methods and terminology.
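as a concrete illustration of the type-constraint point above , here is a minimal sketch of the generate-and-test idea. everything in it ( the toy type lattice , the construction names , the slot layout ) is invented for the example rather than taken from the actual formalism ; the point is only that type matching is a cheap first filter , and that types alone can leave several candidate constructions standing , which is exactly the over-generation worry.

```python
# a minimal sketch , not the committed formalism ; all names are invented .

from dataclasses import dataclass

# a toy type lattice : each type lists everything it can be construed as .
SUPERTYPES = {
    "Cafe":     {"Cafe", "Building", "Place", "Entity"},
    "Building": {"Building", "Place", "Entity"},
    "Place":    {"Place", "Entity"},
    "Entity":   {"Entity"},
}

def subsumes(general, specific):
    """true if `specific` can be construed as `general`."""
    return general in SUPERTYPES.get(specific, set())

@dataclass
class Construction:
    name: str
    slot_types: dict  # slot name -> required semantic type

def candidate_matches(constructions, fillers):
    """generate : keep every construction whose typed slots are compatible
    with the proposed fillers ; the test against the full meaning
    constraints would only run on these survivors ."""
    for cxn in constructions:
        if all(subsumes(required, fillers[slot])
               for slot, required in cxn.slot_types.items()
               if slot in fillers):
            yield cxn.name

cxns = [
    Construction("WhereIsX", {"x": "Place"}),
    Construction("HowDoIGetToX", {"x": "Place"}),
    Construction("TellMeAboutX", {"x": "Entity"}),
]

# "the cafe" is type-compatible with all three constructions , so types
# alone still over-generate ; that is the worry voiced in the dialogue .
print(list(candidate_matches(cxns, {"x": "Cafe"})))
```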
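on the xml / javabayes conversion also mentioned above : javabayes reads the xmlbif interchange format , so one direction of the conversion amounts to emitting that format. the sketch below writes a toy two-node net ; the node names and numbers are placeholders , and this illustrates the format rather than the actual conversion program being written.

```python
# a sketch of emitting the xmlbif format that javabayes reads ; the toy
# net , names and numbers are placeholders , not the real conversion tool .

def xmlbif(variables, cpts, net_name="TouristNet"):
    """variables : name -> list of outcomes .
    cpts : name -> ( list of parent names , flat probability table )."""
    lines = ['<?xml version="1.0"?>', '<BIF VERSION="0.3">', "<NETWORK>",
             f"<NAME>{net_name}</NAME>"]
    for name, outcomes in variables.items():
        lines.append('<VARIABLE TYPE="nature">')
        lines.append(f"  <NAME>{name}</NAME>")
        lines += [f"  <OUTCOME>{o}</OUTCOME>" for o in outcomes]
        lines.append("</VARIABLE>")
    for name, (parents, table) in cpts.items():
        lines.append("<DEFINITION>")
        lines.append(f"  <FOR>{name}</FOR>")
        lines += [f"  <GIVEN>{p}</GIVEN>" for p in parents]
        lines.append("  <TABLE>" + " ".join(str(p) for p in table) + "</TABLE>")
        lines.append("</DEFINITION>")
    lines += ["</NETWORK>", "</BIF>"]
    return "\n".join(lines)

variables = {"Thrifty": ["true", "false"],
             "GoThere": ["true", "false"]}
cpts = {"Thrifty": ([], [0.5, 0.5]),
        # assumed row order : Thrifty=true first , then Thrifty=false
        "GoThere": (["Thrifty"], [0.3, 0.7, 0.6, 0.4])}
print(xmlbif(variables, cpts))
```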
professor d: , let 's get started . hopefully nancy will come , if not , she wo n't . grad b: , robert , do you have any way to turn off your screensaver on there so that it 's not going off every , it seems to have about at two minute grad c: and i told it to stay on forever and ever , but if it 's not plugged in it just does n't obey my commands . it has a mind . grad c: but we 'll just be m working on it at intensity so it does n't happen . we 'll see . should we plunge right into it ? so , would you like to grad c: so what i ' ve tried to do here is list all the decision nodes that we have identified on this side . commented and what they 're about and the properties we may give them . and here are the tasks to be implemented via our data collection . so all of these tasks the reading is out of these tasks more or less imply that the user wants to go there , sometime or the other . and analogously , here we have our eva intention . and these are the data tasks where w we can assume the person would like to enter , view or just approach the thing . analogously the same on the object information we can see that , we have created these tasks before we came up with our decision nodes so there 's a lot of things where we have no analogous tasks , and that may or may not be a problem . we can change the tasks slightly if we feel that we should have data for e for every decision node so trying to i m implant the intention of going to a place now , going to a place later on the same tour , or trying to plant the intention of going sometime on the next tour , or the next day or whenever . professor d: so . so let me pop up a level . and s make that we 're all oriented the same . so what we 're gon na do today is two related things . one of them is to work on the semantics of the belief - net which is going to be the main inference engine for thi the system making decisions . and decisions are going to turn out to be parameter choices for calls on other modules . so f the natural language understanding thing is , we think gon na only have to choose parameters , but , a fairly large set of parameters . so to do that , we need to do two things . one of which is figure out what all the choices are , which we ' ve done a fair amount . then we need to figure out what influences its choices and finally we have to do some technical work on the actual belief relations and presumably estimates of the probabilities and . but we are n't gon na do the probability today . technical we 'll do another day . probably next week . but we are gon na worry about all the decisions and the things that pert that contribute to them . and we 're also , in the same process , going to work with fey on what there should be in the dialogues . so one of the s steps that 's coming up real soon is to actually get subjects in here , and have them actually record like this . record dialogues more or less . and depending on what fey provokes them to say , we 'll get information on different things . professor d: and so for , keith and people worrying about what constructions people use , we have some i we have some ways to affect that the dialogues go . so what robert kindly did , is to lay out a table of the kinds of things that might come up , and , the kinds of decisions . so the on the left are decision nodes , and discreet values . 
so if we 're right , you can get by with just this middle column worth of decisions , and it 's not all that many , and it 's perfectly feasible technically to build belief - nets that will do that . and he has a handout . grad c: maybe it was too fast plunging in there , because j we have two updates . you can look at this if you want , these are what our subject 's going to have to fill out . any comments still be made and the changes will be put in correspondingly . grad c: let me summarize in two sentences , mainly for eva 's benefit , who probably has not heard about the data collection , . or have you heard about it ? grad c: no . ok . we were gon na put this in front of people . they give us some information on themselves . then then they will read a task where lots of german words are thrown in between . and and they have to read isolated proper names and these change grad c: no , this is not the release form . this is the speaker information form . grad c: and then they gon na have to f choose from one of these tasks , which are listed here . they they pick a couple , say three six . six different things they think they would do if they were in heidelberg or traveling someplace and they have a map . like this . very sketchy , simplified map . and they can take notes on that map . and then they call this computer system that works perfectly , and understands everything . and grad c: the comp , the computer system sits right in front of you , that 's fey . grad c: and she has a way of making this machine talk . so she can copy sentences into a window , or type really fast and this machine will use speech synthesis to produce that . so if you ask " how do i get to the castle " then a m s several seconds later it 'll come out of here " in order to get to the castle you do " ok ? and then after three tasks the system breaks down . and fey comes on the phone as a human operator . and says " the system broke down but let 's continue . " and we get the idea what people do when they s think they speak to a machine and what people say when they think they speak to a human , or know , or assume they speak to a human . grad c: that 's the data collection . and fey has some thirty subjects lined up ? something ? and they 're r ready to roll . grad c: and we 're still l looking for a room on the sixth floor because they stole away that conference room . behind our backs . but professor d: , there are these , i see , we have to , it 's tricky . we 'll let 's let we 'll do that off - line , ok . grad c: , but i i it 's happening . david and jane and lila are working on that as we speak . that was the data collection in a nutshell . and report a so i did this but i also tried to do this so if i click on here , is n't this wonderful ? we get to the belief - net just focusing on the g go - there node . analogously this would be the reason node and the timing node and . and what w what happened is that design - wise i 'd n noticed that we can we still get a lot of errors from a lot of points to one of these sub go - there user go - there situation nodes . so i came up with a couple of additional nodes here where whether the user is thrifty or not , and what his budget is currently like , is going to result in some financial state of the user . how much will he is he willing to spend ? or can spend . being the same at this just the money available , which may influence us , whether he wants to go there if it is charging tons of dollars for admission or its gon na g cost a lot of t e whatever . 
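a minimal sketch of the intermediate "financial state" node just described : "thrifty" and "budget" feed it , and the rest of the net only ever has to look at its output. the probabilities are invented placeholders for the cpt values that still have to be estimated.

```python
# invented numbers for p( financial state | thrifty , budget ) ; the rest
# of the net only ever has to look at the marginal this node produces .

P_WILLING = {
    # ( thrifty , budget ) -> p( willing to spend )
    (True,  "low"):  0.05,
    (True,  "high"): 0.30,
    (False, "low"):  0.40,
    (False, "high"): 0.90,
}

def p_willing(p_thrifty, p_budget_high):
    """marginalise out both parents ( assumed independent here )."""
    total = 0.0
    for thrifty in (True, False):
        for budget in ("low", "high"):
            p_parents = ((p_thrifty if thrifty else 1 - p_thrifty)
                         * (p_budget_high if budget == "high"
                            else 1 - p_budget_high))
            total += p_parents * P_WILLING[(thrifty, budget)]
    return total

# e.g. a probably-thrifty user whose budget we are unsure about :
print(round(p_willing(p_thrifty=0.8, p_budget_high=0.5), 3))  # 0.27
```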
twenty - two million to fly to international space station , . just not all people can do that . grad c: so , and this actually turned out to be pretty key , because having specified these this intermediate level and noticing that everything that happens here let 's go to our favorite endpoint one is again more or less we have then the situation nodes contributing to the endpoint situation node , which contributes to the endpoint and . now draw straight lines from these to here , meaning it g goes where the sub - s everything that comes from situation , everything that comes from user goes with the sub - u , and whatever we specify for the so - called " keith node " , or the discourse , what comes from the parser , construction parser , will contribute to the d and the ontology to the sub - o node . and one just s has to watch which also final decision node so it does n't make sense t to figure out whether he wants to enter , view or approach an object if he never wants to go there in the first place . but this makes the design thing fairly simple . and now all w that 's left to do then is the cpg 's , the conditional probabilities , for the likelihood of a person having enough money , actually wanting to go a place if it costs , this or that . and ok . and once bhaskara has finished his classwork that 's where we 're gon na end up doing . you get involved in that process too . and for now the question is " how much of these decisions do we want to build in explicitly into our data collection ? " so , one could think of we could call the z see or , people who visit the zoo we could s call it " visit the zoo tomorrow " , so we have an intention of seeing something , but not now but later . professor d: , so let 's s see i th that from one point of view , , all these places are the same , so that d that , in terms of the linguistics and , there may be a few different kinds of places , so i th i it seems to me that we ought to decide , what things are k are actually going to matter to us . and , so the zoo , and the university and the castle , et cetera . are all big - ish things that have different parts to them , and one of them might be fine . grad c: , . the the reason why we did it that way , as a reminder , is no person is gon na do all of them . grad c: , i usually visit zoos , or i usually visit castles , or i usually and then you pick that one . professor d: right , no , but s th point is to y to build a system that 's got everything in it that might happen you do one thing . professor d: t to build a system that had the most data on a relatively confined set of things , you do something else . and the speech people , are gon na do better if they if things come up repeatedly . now , if everybody says exactly the same thing then it 's not interesting . so , all i ' m saying is i th there 's a question of what we 're trying t to accomplish . and my temptation for the data gathering would be to , and each person is only gon na do it once , so you do n't have to worry about them being bored , so if it 's one service , one luxury item , one big - ish place , and so on , then my is that the data is going to be easier to handle . now you have this i possible danger that somehow there 're certain constructions that people use when talking about a museum that they would n't talk about with a university and , but i ' m i m my temptation is to go for simpler . , less variation . 
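the sub-node decomposition described at the start of this stretch ( user evidence into sub-u , situation into sub-s , the "keith node" / discourse into sub-d , ontology into sub-o , with only those four feeding the final go-there decision ) is what keeps the design simple : the final table has four parents no matter how much raw evidence there is. the noisy-or fill below is just one assumed way to populate that table , and all numbers are invented.

```python
# invented numbers ; noisy-or is one assumed way to fill the final table ,
# not the committed design .

def noisy_or(p_parents_true):
    """p( child = true ) when each parent independently suffices to turn
    the child on with the given probability ."""
    p_off = 1.0
    for p in p_parents_true:
        p_off *= 1.0 - p
    return 1.0 - p_off

# p( each sub-node votes "go" ) , e.g. as produced by pieces like the
# financial-state sketch above :
sub_u, sub_s, sub_d, sub_o = 0.7, 0.4, 0.9, 0.5
print(round(noisy_or([sub_u, sub_s, sub_d, sub_o]), 3))  # 0.991

# the payoff of the intermediate layer : four binary parents need a
# 2**4 = 16 row table , versus 2**12 = 4096 rows if a dozen raw evidence
# nodes fed the decision node directly .
print(2 ** 4, "rows vs", 2 ** 12)
```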
but i what other people think about this in terms of grad b: so i do n't exactly understand like i we 're trying to limit the detail of our ontology or types of places that someone could go , right ? but who is it that has to care about this , or what component of the system ? professor d: , th there are two places where it comes up . one is in the th these people who are gon na take this and try to do speech with it . lots of pronunciations of th of the same thing are going to give you better data than l , a few pronunciations of lots more things . that 's one . grad b: so we would rather just ask have a bunch of people talk about the zoo , and assume that will that the constructions that they use there will give us everything we need to know about these zoo , castle , whatever type things , these bigger places . grad b: and that way you get the speech data of people saying " zoo " over and over again or whatever too . professor d: so this is a question for you , and , if we do , and we probably will , actually try to build a prototype , probably we could get by with the prototype only handling a few of them anyway . so , grad c: , the this was these are all different activities . but y i got the point and i like it . we can do put them in a more hierarchical fashion . so , " go to place " and then give them a choice , either they 're the symphony type or opera type or the tourist site guide type or the nightclub disco type person and they say " this is on that " go to big - ish place " , this is what i would do . " and then we have the " fix " thing , and then maybe " do something the other day " thing , so . my question is i , to some extent , we should y we just have to try it out and see if it works . it would be challenging , in a sense , to try to make it so complex that they even really should schedule , or to plan it , a more complex thing in terms of ok , they should get the feeling that there are these s six things they have to do and they sh can be done maybe in two days . so they make these decisions , professor d: , it 's easy enough to set that up if that 's your expectation . so , the system could say , " , we 'd like to set up your program for two days in heidelberg , let 's first think about all the things you might like to do . so there th i in i th i ' m that if that 's what you did then they would start telling you about that , and then you could get into various things about ordering , if you wanted . grad c: - . , but this is part of the instructor 's job . and that can be done , to say , " ok now we ' ve picked these six tasks . " now you have you can call the system and you have two days . professor d: no , we have to help we have to decide . fey will p carry out whatever we decide . but we have to decide , what is the appropriate scenario . that 's what we 're gon na talk about t . phd f: but these are two different scenarios entirely . , one is a planner the other , it give you instructions on the spot grad c: , but th the i do n't i ' m not really interested in " phase planning " capabilities . but it 's more the how do people phrase these planning requests ? so are we gon na masquerade the system as this as you said simple response system , i have one question i get one response , or should we allow for a certain level of complexity . and a i w think the data would be nicer if we get temporal references . 
grad b: , it seems that , off the top of my head it kinda seems like you would probably just want , richer data , more complex going on , people trying to do more complex sets of things . , if our goal is to really be able to handle a whole bunch of different , then throwing harder situations at people will get them to do more linguistic more interesting linguistic . but i ' m not really , because i do n't fully understand like what our choices are of ways to do this here yet . grad c: w we have tested this and a y have you heard listen to the f first two or th the second person is was faced with exactly this setup . grad b: i started to listen to one and it was just like , , depressing . i 'd just listen to the beginning part and the person was just reading off her script . and . grad c: , it is already with this it got pretty with this setup and that particular subject it got pretty complex . grad c: maybe i suggest we make some fine tuning of these , get run through ten or so subjects and then take a breather , and see whether we wanna make it more complex or not , depending on what results we 're getting . grad b: right . it , i am just today , next couple days gon na start really diving into this data . i ' ve looked at one of the files one of these l y you gave me those dozens of files and i looked at one of them which was about ten sentences , found fifteen , twenty different construction types that we would have to look for and so on and like , " alright , let 's start here . " . so i have n't really gone into the , looked of the that 's going on . so i do n't really right , once i start doing that i 'll have more to say about this thing . professor d: but th but you did say something important , which is that you can probably keep yourself fairly occupied with the simple cases for quite a while . although , th so that sa s does suggest that , now , i have looked the data , and it 's pre it 's actually at least to an amateur , quite redundant . professor d: that that it was very stylized , and quite a lot of people said more or less the same thing . grad b: i did scan it at first and noticed that , and then looked in detail at one of them . but , i noticed that , too . grad c: and with this we 're getting more . no question . w do we wanna get going beyond more , which is the professor d: , ok , so let 's take let 's i your suggestion is good , which is we 'll do a b a batch . and , fey , how long is it gon na be till you have ten subjects ? couple days ? or thr f a a week ? or i do n't have a feel for th professor d: , it 's up to you , i j i e we do n't have any huge time pressure . it 's just when you have t professor d: , ok . so let 's do this . let 's plan next monday , ok , to have a review of what we have so far . professor d: no , we wo n't have the transcriptions , but what we should be able to do and i if , fey , if you will have time to do this , but it would be great if you could , not transcribe it all , but pick out , some . we could lis just sit here and listen to it all . are you gon na have the audio on the web site ? grad c: until we reach the gigabyte thing and david johnson s ki kills me . and we 're gon na put it on the web site . professor d: , we could get , you can buy another disk for two hundred dollars , right ? it 's not like so , we 'll take care of david johnson . professor d: alright . so we 'll buy a disk . 
but anyway , so , if you if you can think of a way to , point us to th to interesting things , as you 're doing this , make your make notes that this is , something worth looking at . and other than that , i we 'll just have to , listen although i it 's only ten minutes each , roughly . undergrad e: , i . i ' m not how long it 's actually going to take . grad c: the reading task is a lot shorter . that was cut by fifty percent . and the reading , nobody 's interested in that except for the speech people . grad c: it feels like forever when you 're doing it , but then it turns out to be three minutes and forty five seconds . professor d: could be . i was thinking people would , hesitate and whatever . whatever it is we 'll deal with it . professor d: ok , so that 'll be on the web page . that 's great . but anyway , so it 's a good idea to start with the relatively straight forward res just response system . and then if we want to get them to start doing multiple step planning with a whole bunch of things and then organize them an tell them which things are near each other , any of that . , " which things would you like to do tuesday morning ? " so i th that seems pretty straight forward . undergrad e: that w maybe one thing we should do is go through this list and select things that are categories and then o offer only one member of that category ? undergrad e: and then , they could be alternate versions of the same if you wanted data on different constructions . undergrad e: like one person gets the version with the zoo as a choice , and the other person gets the grad c: no , th the per the person do n't get it . , this is why we did it , because when we gave them just three tasks for w part - a and three tasks for part - b a undergrad e: no , they could still choose . they just would n't be able to choose both zoo and say , touring the castle . grad c: exactly . this is limiting the choices , but . right . ok , . but this approach will very work , but the person was able to look at it and say " ok , this is what i would actually do . " grad c: ok , we got ta disallow traveling to zoos and castles at the same time , grad c: and , these are just places where you enter , much like here . but we can professor d: , if y if you use the right verb for each in common , like at , " attend a theater , symphony or opera " is a group , and " tour the university , castle or zoo " , all of these d do have this " tour " aspect about the way you would go to them . and , the movie theater is probably also e is a " attend " et cetera . professor d: and then , what one would expect is that the sentence types would their responses would tend to be grouped according to the activity , you would expect . phd f: but i it seem that there is a difference between going to see something , and things like " exchange money " or " dine out " @ function , . grad c: , this is where th the function is definitely different and the getting information or g . but this is open . so since people gon na still pick something , we 're not gon na get any significant amount of redundancy . and for reasons , we do n't want it , really , in that sense . and we would be ultimately more interested in getting all the possible ways of people asking , for different things with or with a computer . and so if you can think of any other high level tasks a tourist may do just always just m mail them to us and we 'll sneak them into the collection . we 're not gon na do much statistical with it . 
grad c: but it seems like since we are getting towards subject fifty subjects and if we can keep it up to a five four - ish per week rate , we may even reach the one hundred before fey t takes off to chicago . professor d: , these are all f people off campus s from campus so far , so we how many we can get next door at the shelter . for ten bucks , probably quite a few . professor d: so , alright , so let 's go back then , to the chart with all the decisions and , and see how we 're doing . do do people think that , this is gon na cover what we need , or should we be thinking about more ? grad c: , in terms of decision nodes ? , go - there is a yes or no . right ? i ' m also interested in th in this " property " line here , so if you look at , look at that , timing was i have these three . do we need a final differentiation there ? now , later on the same tour , sometimes on the next tour . grad c: it 's next day , so you 're doing something now and you have planned to do these three four things , and you can do something immediately , you could tag it on to that tour grad c: or you can say this is something i would do s i wanna do sometime l in my life , . grad b: so so this tour is just like th the idea of current s round of touristness or whatever , professor d: , probably between stops back at the hotel . if you wanted precise about it , and that 's the way tourists do organize their lives . , " ok , we 'll go back to the hotel and then we 'll go off and " grad c: , my visit to prague there were some nights where i never went back to the hotel , so whether that counts as a two - day tour or not we 'll have to think . grad c: i . what is the english co cognate if you want , for " sankt nimmerlandstag " ? grad c: " we 'll do it on when you say on that d day it means it 'll never happen . do you have an expression ? probably you sh grad c: , when hell , we 'll do it when hell freezes over . so maybe that should be another property in there . grad c: , the reason why do we go there in the first place ie it 's either for sightseeing , for meeting people , for running errands , or doing business . entertainment is a good one in there , . i agree . grad b: so , business is supposed to , be it like professional type , right , like that ? grad c: this w this is an old johno thing . he had it in there . who is the tour is the person ? so it might be a tourist , it might be a business man who 's using the system , who wants to go to some grad b: , like my father is about to travel to prague . he 'll be there for two weeks . he is going to he 's there to teach a course at the business school but he also is touring around and so he may have some mixture of these things . phd f: what ab what do you have in mind in terms of socializing ? what activities ? grad c: , just meeting people , . i want to meet someone somewhere , which be puts a very heavy constraint on the " eva " , because then if you 're meeting somebody at the town hall , you 're not entering it usually , you 're just want to approach it . grad b: so , does this capture , like , where do you put exchange money is an errand , but what about grad b: so , like " go to a movie " is now entertainment , dine out is professor d: but i would say that if " dine out " is a special c if you 're doing it for that purpose then it 's entertainment . and we 'll also as y as you 'll s further along we 'll get into business about " , you 're this is going over a meal time , do you wanna stop for a meal or pick up food ? " and that 's different . 
that 's that 's part of th that 's not a destination reason , that 's " en passant , " right . grad c: endpoint is pretty clear . , " mode " , i have found three , drive there , " walk there " or " be driven " , which means bus , taxi , bart . professor d: taxis are very different than buses , but on the other hand the system does n't have any public transport this the planner system does n't have any public transport in it yet . grad c: so this granularity would suffice , w if we say the person probably , based on the utterance we on the situation we can conclude wants to drive there , walk there , or use some other form of transportation . grad b: h how much of heidelberg can you get around by public transport ? in terms of the interesting bits . there 's lots of bits where you do n't really i ' ve only ev was there ten years ago , for a day , so i do n't remember , but . , like the tourist - y bits professor d: you ca n't get to the philosophers ' way very , but , there are hikes that you ca n't get to , but other things you can , if i remember right . grad c: , we actually biking should be a separate point because we have a very strong bicycle planning component . grad c: bicycles c should be in there , but , will we have bic is this realistic ? grad b: i would lump it with " walk " because hills matter . right ? things like that . grad c: ok , " length " is , you wanna get this over with as fast as possible , you wanna use some part of what of the time you have . , they can . but we should just make a decision whether we feel that they want to use some substantial or some fraction of their time . grad c: , they wanna do it so badly that they are willing to spend the necessary and plus time . and y , if we feel that they wanna do nothing but that thing then , we should point out that to the planner , that they probably want to use all the time they have . so , stretch out that visit for that . grad b: it seems like this would be really hard to . , on the part of the system . it seems like it you 're talking about rather than having the user decide this you 're supposed t we 're supposed to figure it out ? grad c: th - the user can always s say it , but it 's just we hand over these parameters if we make if we have a feeling that they are important . grad c: and that we can actually infer them to a significant de degree , or we ask . professor d: and par , and part of the system design is that if it looks to be important and you ca n't figure it out , then you ask . but hopefully you do n't ask , a all these things all the time . or so , y but there 's th but definitely a back - off position to asking . grad c: and if no part of the system ever comes up with the idea that this could be important , no planner is ever gon na ask for it . y so and i like the idea that , jerry pushed this idea from the very beginning , that it 's part of the understanding business to make a good question of what 's important in this general picture , what you need t if you wanna simulate it , what parameters would you need for the simulation ? and , timing , length would definitely be part of it , costs , little money , some money , lots of money ? actually , maybe f so , f phd f: i must say that thi this one looks a bit strange to me . maybe it seems like appropriate if i go to las vegas . but i decide k how much money i ' m willing to lose . but a i as a tourist , i 'll just paying what 's more or less is required . professor d: , no . 
there are there 're different things where you have a ch choice , professor d: , this t interacts with " do am i do are you willing to take a taxi ? " professor d: or , if you 're going to the opera are you gon na l look for the best seats or the peanut gallery professor d: whatever ? s so there are a variety of things in which tour - tourists really do have different styles eating . another one , grad c: the what my sentiment is they 're , i once had to write a charter , a carter for a student organization . and they had wanted me to define what the quorum is going to be . and i looked at the other ones and they always said ten percent of the student body has to be present at their general meeting otherwise it 's not a and i wrote in there " en - enough " people have to be there . and it was hotly debated , but people with me that everybody probably has a good feeling whether it was a farce , a joke , or whether there were enough people . and if you go to turkey , you will find when people go shopping , they will say " how much cheese do you want ? " and they say " , enough . " and the and the this used all over the place . because the person selling the cheese knows , that person has two kids and , a husband that dislikes cheese , so this is enough . and so the middle part is always the golden way , right ? so you can s you can be really make it as cheap as possible , or you can say " i want , er , i do n't care " grad c: money is no object , or you say " want to spend enough " . or the sufficient , or the appropriate amount . but , then again , this may turn out to be insufficient for our purposes . but , this is my first , grad c: in much the same way as how d should the route be ? should it be the easiest route , even if it 's a b little bit longer ? no steep inclinations ? go the normal way ? whatever that again means , er or do you does the person wanna rough it ? grad b: th so there 's a couple of different ways you can interpret these things " i want to go there and i do n't care if it 's really hard . " or if you 're an extreme sport person , . i wanna go there and i insist on it being the hard way . , so i assume we 're going for the first interpretation , something like i 'll go th i 'd li it 's different from thing to grad c: , this is all , top of my head . no no research behind that . " object information " , do i do i wanna know anything about that object ? is either true or false . and . if i care about it being open , accessible or not , i do n't think there 's any middle ground there . , either i wanna know where it is or not , i wanna know about it 's history or not , or , i wanna know about what it 's good for or not . maybe one could put scales in there , too . so i wanna know a l lot about it . professor d: , now ob ok , i ' m , go ahead , what were you gon na say ? grad c: one could put scales in there . so i wanna know a lot about the history , just a bit . professor d: , right y i w if we w right . so " object " becomes " entity " , professor d: and we think that 's it , interestingly enough , that , th or very close to it is going to be enough . professor d: alright , so so the order of things is that , robert will clean this up a little bit , although it looks pretty good . professor d: , so so , in parallel , three things are going to happen . robert and eva and bhaskara are gon na actually build a belief - net that , has cpt 's and , tries to infer this from various kinds of information . 
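for reference , the decision nodes and discrete values discussed above , gathered into one provisional table ; names and groupings are transcribed from the discussion and are still open ( e.g. whether biking stays its own mode or gets lumped with walking ).

```python
# provisional transcription of the decision nodes discussed above ; the
# names and value sets are still open .

DECISION_NODES = {
    "go_there": ["yes", "no"],
    "timing":   ["now", "later_same_tour", "next_tour_or_day"],
    "reason":   ["sightseeing", "entertainment", "socialising",
                 "errands", "business"],
    "endpoint": ["enter", "view", "approach"],           # the eva node
    "mode":     ["drive", "walk", "bike", "be_driven"],  # bus / taxi / bart
    "length":   ["as_little_as_possible", "some_fraction",
                 "all_the_time_needed"],
    "costs":    ["as_cheap_as_possible", "enough", "money_no_object"],
    "route":    ["easiest", "normal", "rough_it"],
    # object information : independent true / false questions about the
    # entity ( open or accessible , location , history , function )
    "object_info": ["open_accessible", "location", "history", "function"],
}

# sanity check on the size of the decision space :
from math import prod
print(prod(len(v) for v in DECISION_NODES.values()))  # 38880 combinations
```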
and fey is going to start collecting data , and we 're gon na start thinking a about what constructions we want to elicit . and then w go it may iterate on , further data collection to elicit grad b: d do you mean eliciting particular constructions ? or do you mean like what kinds of things we want to get people talking about ? semantically speaking , ? professor d: , yes . both . , and though for us , constructions are primarily semantic , and so grad b: from my point of view i ' m trying to care about the syntax , so professor d: that too , but if th if we in if we , make that we get them talking about temporal order . ok , that would be great and if th if they use prepositional phrases or subordinate clauses or whatever , professor d: w , whatever form they use is fine . but that probably we 're gon na try to look at it as , s what semantic constructions d do we want them to do direc , " caused motion " , i , something like that . but , - this is actually a conversation you and i have to have about your thesis fantasies , and how all this fits into that . grad c: , i will tell you the german tourist data . because i have not been able to dig out all the out of the m ta thirty d v if you grad b: is that roughly the equivalent of what i ' ve seen in english or is it grad b: same ok , that . like what what have i got now ? i have what i ' m loo what i those files that you sent me are the user side of some interaction with fey ? grad c: some data i collected in a couple weeks for training recognizers and email way back when . grad c: nothing to write home about . and the see this ontology node is probably something that i will try to expand . once we have the full ontology api , what can we expect to get from the ontology ? and hopefully you can also try to find out , sooner or later in the course of the summer what we can expect to get from the discourse that might , or the not the discourse , the utterance as it were , in terms of professor d: but that 's a he 's g he 's hoping to do this for his masters ' thesis s by a year from now . professor d: limited . , the idea is , the hope is that the parser itself is , pretty robust . but it 's not popular it 's only p only grad b: sometime , i have to talk to some subset of the people in this group , at least about what constructions i ' m looking for . , like just again , looking at this one thing , i saw y things from as general as argument structure constructions . , i have to do verb phrase . i have to do unbounded dependencies , which have a variety of constructions in instantiate that . on the other hand i have to have , there 's particular , fixed expressions , or semi - fixed expressions like " get " plus path expression for , " how d ho how do i get there ? " , how do i get in ? , how do i get away ? and all that . , so there 's a variety of different sorts of constructions and it 's like anything goes . like professor d: ok , so this is we 're gon na mainly work on with george . ok , and hi let me f th say what is so the idea is first of all i misspoke when i said we thought you should do the constructions . for a linguist that means to do completely and perfectly . so what i , ok , so what i meant was " do a first cut at " . professor d: ok , because we do wanna get them r u perfectly but we 're gon na have to do a first cut at a lot of them to see how they interact . grad b: right , exactly . now it w we talked about this before , right . 
and i me it would be completely out of the question to really do more than , say , like , i , ten , over the summer , but we need to get a general view of what things look like , so professor d: so the idea is going to be to do like nancy did in some of the er these papers where you do enough of them so you can go from top to bottom so you can do f , f have a complete story ov of s of some piece of dialogue . and that 's gon na be much more useful than having all of the clausal constructions and nothing else , or like that . so that the trick is going to be t to take this and pick a some lattice of constructions , so some lexical and some phrasal , and , whatever you need in order to , be able to then , by hand , explain , some fraction of the utterances . and so , exactly which ones will partly depend on your research interests and a bunch of other things . grad b: but in terms of the s th level of analysis , these do n't necessarily have to be more complex than like the " out of " construction in the bcp paper where it 's just like , half a page on each one . professor d: correct . v a half a page is what we 'd like . and if there 's something that really requires a lot more than that then it does and we have to do it , grad c: we could sit down and think of the ideal speaker utterances , and two or three that follow each other , so , where we can also , once we have everything up and running , show the tremendous , insane inferencing capabilities of our system . so , as the smartkom people have . this is their standard demo dialogue , which is , what the system survives and nothing but that . , we could also sor have the analogen of o our sample sentences , the ideal sentences where we have complete construction coverage and , they match nicely . so the " how do i get to x ? " , that 's definitely gon na be , a major one . grad c: where is x ? might be another one which is not too complicated . and " tell me something about x . " and hey , that 's already covering eighty percent of the system 's functionality . professor d: ye - right , but it 's not covering eighty percent of the intellectual interest . grad c: no , we can w throw in an " out of film " construction if you want to , but professor d: the th there 's a lot that needs to be done to get this right . ok , i th we done ? grad c: , the action planner guy has wrote has written a p lengthy proposal on how he wants to do the action planning . and i responded to him , also rather lengthy , how he should do the action planning . grad c: yes . and i tacked on a little paragraph about the fact that the whole world calls that module a dis disc dialogue manager , and would n't it make sense to do this here too ? and also rainer m malaka is going to be visiting us shortly , most likely in the beginning of june . grad c: he - he 's just in a conference somewhere and he is just swinging through town . and m making me incapable of going to naacl , for which i had funding . but . no , no pittsburg this year . when is the santa barbara ? grad c: who is going to ? should a lot of people . that 's something i will would enjoy . professor d: , probably we can pay for it . a student rate should n't be very high . so , if we all decide it 's a good idea for you to go then you 'll we 'll pay for it . professor d: i do n't have a feeling one way or the other at the moment , but it probably is . ok , great .
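in the spirit of the half-page-per-construction plan , this is what one such entry might look like if transcribed into code. the dataclass layout , slot names and frame name are invented for illustration ; the real formalism is still being worked out.

```python
# an invented illustration of a half-page construction entry ; the real
# formalism is still under discussion .

from dataclasses import dataclass, field

@dataclass
class ConstructionEntry:
    name: str
    form: list               # ordered constituents / lexical anchors
    meaning_type: str        # semantic frame evoked
    bindings: dict = field(default_factory=dict)

HOW_DO_I_GET_TO_X = ConstructionEntry(
    name="HowDoIGetToX",
    form=["how", "do", "NP[speaker]", "get", "PP[to-goal]"],
    meaning_type="SelfMotionPath",
    bindings={
        "mover": "speaker",
        "goal":  "to-goal.landmark",  # the x slot ; must type-match a place
    },
)

print(HOW_DO_I_GET_TO_X.name, "evokes", HOW_DO_I_GET_TO_X.meaning_type)
```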
the main focus of the meeting was firstly on the structure of the belief-net , its decision nodes and the parameters that influence them , and secondly , on the design of the data collection tasks. for the latter , there are already 30 subjects lined up and more are expected to be recruited off campus. it was agreed that making subjects select from categories of tasks , such as "big place" , "service" , etc. , could provide a better range of data. the duration of each dialogue will probably be no more than 10 minutes. on the other hand , the organisation of the intermediate nodes of the belief-net and their properties is almost complete , although no conditional probabilities have been inserted yet. these nodes represent decisions that will function as parameters to action calls in the system. their values will either be inferred from the user-system interaction or , as a last resort , requested directly from the user. finally , as to the semantic and syntactic constructions , work will start with more general and brief descriptions , before moving to exhaustive analysis of at least a subset. similarly , the construction parser that is to be built within a year is expected to be relatively basic , yet robust.

as the data collection is ready to start , it was agreed that for the first ten subjects the interaction with the system/instructor will be along the lines of a basic response system. tasks will be divided into categories ( "tour" , "attend" , etc. ) and subjects are going to be asked to choose no more than one task out of each category. this first run will probably take a couple of weeks , but the first results ( audio files and selected highlights ) will be discussed shortly , in order to decide whether more detail ( complex spatial relationships , temporal planning , etc. ) should be included in the design or particular constructions be elicited. regarding the completion of the belief-net , the remaining details , mainly the properties of the ontology and discourse nodes , should be added. after building in the conditional probability tables , a working prototype of the net will be ready. finally , the initial work on constructions should focus on a general overview of the dialogues with brief descriptions. further analysis will follow from there in a top-down fashion.

although there is an effort to include some of the key features of the belief-net in the design of the data gathering , not all of them can be built in. the tasks that the subjects will have to carry out will be categorised in ways that will indicate eva intentions ; however , this approach may limit the variety of possible constructions used within a single category of entities. on the other hand , generating more diverse dialogues may have an adverse effect from a speech recognition perspective. a minor problem has arisen with the laboratory where recordings are supposed to take place , but this is currently being sorted out. as regards the completion of the belief-net , no work has been done on the cpt's yet. finally , it was noted that although a general overview of the pertinent constructions is attainable , no more than ten of them can be analysed in detail within the summer months.

a detailed diagram of the eva belief-net was presented and some of the intermediate nodes and their properties were discussed in depth.
some of the key features and properties are: "go-there" , which is binary , and defined by the user , situation , ontology and discourse models; "timing" ( current/next tour ); "reason" ( business , sight-seeing , socialising ); "transport"; "length of tour"; "costs"; "entity" ( open , accessible ) , etc.

the data collection that will provide relevant dialogues is moving along , with thirty subjects already lined up. they will be given a reading task , which will include some german proper names , and a series of tasks from the tourist domain to choose from. in order to get directions , they will then communicate with a computer system and a human operator , using a sketchy map as an aid. a different set of data is already available from the smartkom system and similar sources. a preliminary study using this data has shown that a large number of syntactic and semantic constructions can be derived from a small sample.
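the inferred-or-asked policy summarised above might look like the following sketch ; the threshold and all names are invented , and the real criterion for "important but unclear" is still to be designed.

```python
# invented sketch of the back-off policy : read a decision off the net
# when it is confident enough , otherwise ask the user directly .

def decide(node, posterior, matters, threshold=0.8):
    """posterior : dict mapping each value of one decision node to its
    probability under the current evidence ."""
    best_value, best_p = max(posterior.items(), key=lambda kv: kv[1])
    if best_p >= threshold or not matters:
        return best_value        # confident enough , or nobody cares
    return ask_user(node)        # last resort : query the user

def ask_user(node):
    return input("please specify " + node + ": ")

# a planner that does not care much about mode just takes the best guess :
print(decide("mode", {"walk": 0.55, "drive": 0.30, "be_driven": 0.15},
             matters=False))
```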
grad c: but it seems like since we are getting towards subject fifty subjects and if we can keep it up to a five four - ish per week rate , we may even reach the one hundred before fey t takes off to chicago . professor d: , these are all f people off campus s from campus so far , so we how many we can get next door at the shelter . for ten bucks , probably quite a few . professor d: so , alright , so let 's go back then , to the chart with all the decisions and , and see how we 're doing . do do people think that , this is gon na cover what we need , or should we be thinking about more ? grad c: , in terms of decision nodes ? , go - there is a yes or no . right ? i ' m also interested in th in this " property " line here , so if you look at , look at that , timing was i have these three . do we need a final differentiation there ? now , later on the same tour , sometimes on the next tour . grad c: it 's next day , so you 're doing something now and you have planned to do these three four things , and you can do something immediately , you could tag it on to that tour grad c: or you can say this is something i would do s i wanna do sometime l in my life , . grad b: so so this tour is just like th the idea of current s round of touristness or whatever , professor d: , probably between stops back at the hotel . if you wanted precise about it , and that 's the way tourists do organize their lives . , " ok , we 'll go back to the hotel and then we 'll go off and " grad c: , my visit to prague there were some nights where i never went back to the hotel , so whether that counts as a two - day tour or not we 'll have to think . grad c: i . what is the english co cognate if you want , for " sankt nimmerlandstag " ? grad c: " we 'll do it on when you say on that d day it means it 'll never happen . do you have an expression ? probably you sh grad c: , when hell , we 'll do it when hell freezes over . so maybe that should be another property in there . grad c: , the reason why do we go there in the first place ie it 's either for sightseeing , for meeting people , for running errands , or doing business . entertainment is a good one in there , . i agree . grad b: so , business is supposed to , be it like professional type , right , like that ? grad c: this w this is an old johno thing . he had it in there . who is the tour is the person ? so it might be a tourist , it might be a business man who 's using the system , who wants to go to some grad b: , like my father is about to travel to prague . he 'll be there for two weeks . he is going to he 's there to teach a course at the business school but he also is touring around and so he may have some mixture of these things . phd f: what ab what do you have in mind in terms of socializing ? what activities ? grad c: , just meeting people , . i want to meet someone somewhere , which be puts a very heavy constraint on the " eva " , because then if you 're meeting somebody at the town hall , you 're not entering it usually , you 're just want to approach it . grad b: so , does this capture , like , where do you put exchange money is an errand , but what about grad b: so , like " go to a movie " is now entertainment , dine out is professor d: but i would say that if " dine out " is a special c if you 're doing it for that purpose then it 's entertainment . and we 'll also as y as you 'll s further along we 'll get into business about " , you 're this is going over a meal time , do you wanna stop for a meal or pick up food ? " and that 's different . 
that 's that 's part of th that 's not a destination reason , that 's " en passant , " right . grad c: endpoint is pretty clear . , " mode " , i have found three , drive there , " walk there " or " be driven " , which means bus , taxi , bart . professor d: taxis are very different than buses , but on the other hand the system does n't have any public transport this the planner system does n't have any public transport in it yet . grad c: so this granularity would suffice , w if we say the person probably , based on the utterance we on the situation we can conclude wants to drive there , walk there , or use some other form of transportation . grad b: h how much of heidelberg can you get around by public transport ? in terms of the interesting bits . there 's lots of bits where you do n't really i ' ve only ev was there ten years ago , for a day , so i do n't remember , but . , like the tourist - y bits professor d: you ca n't get to the philosophers ' way very , but , there are hikes that you ca n't get to , but other things you can , if i remember right . grad c: , we actually biking should be a separate point because we have a very strong bicycle planning component . grad c: bicycles c should be in there , but , will we have bic is this realistic ? grad b: i would lump it with " walk " because hills matter . right ? things like that . grad c: ok , " length " is , you wanna get this over with as fast as possible , you wanna use some part of what of the time you have . , they can . but we should just make a decision whether we feel that they want to use some substantial or some fraction of their time . grad c: , they wanna do it so badly that they are willing to spend the necessary and plus time . and y , if we feel that they wanna do nothing but that thing then , we should point out that to the planner , that they probably want to use all the time they have . so , stretch out that visit for that . grad b: it seems like this would be really hard to . , on the part of the system . it seems like it you 're talking about rather than having the user decide this you 're supposed t we 're supposed to figure it out ? grad c: th - the user can always s say it , but it 's just we hand over these parameters if we make if we have a feeling that they are important . grad c: and that we can actually infer them to a significant de degree , or we ask . professor d: and par , and part of the system design is that if it looks to be important and you ca n't figure it out , then you ask . but hopefully you do n't ask , a all these things all the time . or so , y but there 's th but definitely a back - off position to asking . grad c: and if no part of the system ever comes up with the idea that this could be important , no planner is ever gon na ask for it . y so and i like the idea that , jerry pushed this idea from the very beginning , that it 's part of the understanding business to make a good question of what 's important in this general picture , what you need t if you wanna simulate it , what parameters would you need for the simulation ? and , timing , length would definitely be part of it , costs , little money , some money , lots of money ? actually , maybe f so , f phd f: i must say that thi this one looks a bit strange to me . maybe it seems like appropriate if i go to las vegas . but i decide k how much money i ' m willing to lose . but a i as a tourist , i 'll just paying what 's more or less is required . professor d: , no . 
there are there 're different things where you have a ch choice , professor d: , this t interacts with " do am i do are you willing to take a taxi ? " professor d: or , if you 're going to the opera are you gon na l look for the best seats or the peanut gallery professor d: whatever ? s so there are a variety of things in which tour - tourists really do have different styles eating . another one , grad c: the what my sentiment is they 're , i once had to write a charter , a carter for a student organization . and they had wanted me to define what the quorum is going to be . and i looked at the other ones and they always said ten percent of the student body has to be present at their general meeting otherwise it 's not a and i wrote in there " en - enough " people have to be there . and it was hotly debated , but people with me that everybody probably has a good feeling whether it was a farce , a joke , or whether there were enough people . and if you go to turkey , you will find when people go shopping , they will say " how much cheese do you want ? " and they say " , enough . " and the and the this used all over the place . because the person selling the cheese knows , that person has two kids and , a husband that dislikes cheese , so this is enough . and so the middle part is always the golden way , right ? so you can s you can be really make it as cheap as possible , or you can say " i want , er , i do n't care " grad c: money is no object , or you say " want to spend enough " . or the sufficient , or the appropriate amount . but , then again , this may turn out to be insufficient for our purposes . but , this is my first , grad c: in much the same way as how d should the route be ? should it be the easiest route , even if it 's a b little bit longer ? no steep inclinations ? go the normal way ? whatever that again means , er or do you does the person wanna rough it ? grad b: th so there 's a couple of different ways you can interpret these things " i want to go there and i do n't care if it 's really hard . " or if you 're an extreme sport person , . i wanna go there and i insist on it being the hard way . , so i assume we 're going for the first interpretation , something like i 'll go th i 'd li it 's different from thing to grad c: , this is all , top of my head . no no research behind that . " object information " , do i do i wanna know anything about that object ? is either true or false . and . if i care about it being open , accessible or not , i do n't think there 's any middle ground there . , either i wanna know where it is or not , i wanna know about it 's history or not , or , i wanna know about what it 's good for or not . maybe one could put scales in there , too . so i wanna know a l lot about it . professor d: , now ob ok , i ' m , go ahead , what were you gon na say ? grad c: one could put scales in there . so i wanna know a lot about the history , just a bit . professor d: , right y i w if we w right . so " object " becomes " entity " , professor d: and we think that 's it , interestingly enough , that , th or very close to it is going to be enough . professor d: alright , so so the order of things is that , robert will clean this up a little bit , although it looks pretty good . professor d: , so so , in parallel , three things are going to happen . robert and eva and bhaskara are gon na actually build a belief - net that , has cpt 's and , tries to infer this from various kinds of information . 
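before the chart gets cleaned up , the decision nodes and value sets discussed above can be written down as plain data ; a sketch , with labels that are paraphrases of the whiteboard rather than a settled schema , and with the three-point scales following the " golden middle " idea :

    # the decision chart as data; labels are paraphrases, not a fixed spec.
    DECISION_NODES = {
        "go_there": ("yes", "no"),
        "timing": ("now", "later_same_tour", "next_tour"),  # plus maybe "never"
        "reason": ("sightseeing", "meeting_people", "errands",
                   "business", "entertainment"),
        "endpoint": ("enter", "view", "approach"),
        "mode": ("drive", "walk", "be_driven", "bike"),
        # three-point "golden middle" scales:
        "length": ("as_fast_as_possible", "some_of_the_time", "all_the_time"),
        "costs": ("as_cheap_as_possible", "enough", "money_no_object"),
        "route": ("easiest", "normal", "rough_it"),
        # object information, per facet, boolean for now (maybe scales later):
        "info_open_accessible": (True, False),
        "info_location": (True, False),
        "info_history": (True, False),
        "info_function": (True, False),
    }

    def legal(node, value):
        # check a value before handing it to the planner as a parameter
        return value in DECISION_NODES[node]

    assert legal("costs", "enough")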
and fey is going to start collecting data , and we 're gon na start thinking a about what constructions we want to elicit . and then w go it may iterate on , further data collection to elicit grad b: d do you mean eliciting particular constructions ? or do you mean like what kinds of things we want to get people talking about ? semantically speaking , ? professor d: , yes . both . , and though for us , constructions are primarily semantic , and so grad b: from my point of view i ' m trying to care about the syntax , so professor d: that too , but if th if we in if we , make that we get them talking about temporal order . ok , that would be great and if th if they use prepositional phrases or subordinate clauses or whatever , professor d: w , whatever form they use is fine . but that probably we 're gon na try to look at it as , s what semantic constructions d do we want them to do direc , " caused motion " , i , something like that . but , - this is actually a conversation you and i have to have about your thesis fantasies , and how all this fits into that . grad c: , i will tell you the german tourist data . because i have not been able to dig out all the out of the m ta thirty d v if you grad b: is that roughly the equivalent of what i ' ve seen in english or is it grad b: same ok , that . like what what have i got now ? i have what i ' m loo what i those files that you sent me are the user side of some interaction with fey ? grad c: some data i collected in a couple weeks for training recognizers and email way back when . grad c: nothing to write home about . and the see this ontology node is probably something that i will try to expand . once we have the full ontology api , what can we expect to get from the ontology ? and hopefully you can also try to find out , sooner or later in the course of the summer what we can expect to get from the discourse that might , or the not the discourse , the utterance as it were , in terms of professor d: but that 's a he 's g he 's hoping to do this for his masters ' thesis s by a year from now . professor d: limited . , the idea is , the hope is that the parser itself is , pretty robust . but it 's not popular it 's only p only grad b: sometime , i have to talk to some subset of the people in this group , at least about what constructions i ' m looking for . , like just again , looking at this one thing , i saw y things from as general as argument structure constructions . , i have to do verb phrase . i have to do unbounded dependencies , which have a variety of constructions in instantiate that . on the other hand i have to have , there 's particular , fixed expressions , or semi - fixed expressions like " get " plus path expression for , " how d ho how do i get there ? " , how do i get in ? , how do i get away ? and all that . , so there 's a variety of different sorts of constructions and it 's like anything goes . like professor d: ok , so this is we 're gon na mainly work on with george . ok , and hi let me f th say what is so the idea is first of all i misspoke when i said we thought you should do the constructions . for a linguist that means to do completely and perfectly . so what i , ok , so what i meant was " do a first cut at " . professor d: ok , because we do wanna get them r u perfectly but we 're gon na have to do a first cut at a lot of them to see how they interact . grad b: right , exactly . now it w we talked about this before , right . 
and i me it would be completely out of the question to really do more than , say , like , i , ten , over the summer , but we need to get a general view of what things look like , so professor d: so the idea is going to be to do like nancy did in some of the er these papers where you do enough of them so you can go from top to bottom so you can do f , f have a complete story ov of s of some piece of dialogue . and that 's gon na be much more useful than having all of the clausal constructions and nothing else , or like that . so that the trick is going to be t to take this and pick a some lattice of constructions , so some lexical and some phrasal , and , whatever you need in order to , be able to then , by hand , explain , some fraction of the utterances . and so , exactly which ones will partly depend on your research interests and a bunch of other things . grad b: but in terms of the s th level of analysis , these do n't necessarily have to be more complex than like the " out of " construction in the bcp paper where it 's just like , half a page on each one . professor d: correct . v a half a page is what we 'd like . and if there 's something that really requires a lot more than that then it does and we have to do it , grad c: we could sit down and think of the ideal speaker utterances , and two or three that follow each other , so , where we can also , once we have everything up and running , show the tremendous , insane inferencing capabilities of our system . so , as the smartkom people have . this is their standard demo dialogue , which is , what the system survives and nothing but that . , we could also sor have the analogen of o our sample sentences , the ideal sentences where we have complete construction coverage and , they match nicely . so the " how do i get to x ? " , that 's definitely gon na be , a major one . grad c: where is x ? might be another one which is not too complicated . and " tell me something about x . " and hey , that 's already covering eighty percent of the system 's functionality . professor d: ye - right , but it 's not covering eighty percent of the intellectual interest . grad c: no , we can w throw in an " out of film " construction if you want to , but professor d: the th there 's a lot that needs to be done to get this right . ok , i th we done ? grad c: , the action planner guy has wrote has written a p lengthy proposal on how he wants to do the action planning . and i responded to him , also rather lengthy , how he should do the action planning . grad c: yes . and i tacked on a little paragraph about the fact that the whole world calls that module a dis disc dialogue manager , and would n't it make sense to do this here too ? and also rainer m malaka is going to be visiting us shortly , most likely in the beginning of june . grad c: he - he 's just in a conference somewhere and he is just swinging through town . and m making me incapable of going to naacl , for which i had funding . but . no , no pittsburg this year . when is the santa barbara ? grad c: who is going to ? should a lot of people . that 's something i will would enjoy . professor d: , probably we can pay for it . a student rate should n't be very high . so , if we all decide it 's a good idea for you to go then you 'll we 'll pay for it . professor d: i do n't have a feeling one way or the other at the moment , but it probably is . ok , great . 
###summary: the main focus of the meeting was firstly on the structure of the belief-net , its decision nodes and the parameters that influence them , and secondly , on the design of the data collection tasks . for the latter , there are already 30 subjects lined up and more are expected to be recruited off campus . it was agreed that making subjects select from categories of tasks , such as "big place" , "service" , etc . could provide a better range of data . the duration of each dialogue will probably be no more than 10 minutes . on the other hand , the organisation of the intermediate nodes of the belief-net and their properties is almost complete , although no conditional probabilities have been inserted yet . these nodes represent decisions that will function as parameters to action calls in the system . their values will either be inferred from the user-system interaction or , as a last resort , requested directly from the user . finally , as to the semantic and syntactic constructions , work will start with more general and brief descriptions , before moving to exhaustive analysis of at least a subset . similarly , the construction parser that is to be built within a year is expected to be relatively basic , yet robust . as the data collection is ready to start , it was agreed that for the first ten subjects the interaction with the system/instructor will be along the lines of a basic response system . tasks will be divided into categories ( "tour" , "attend" , etc . ) and subjects are going to be asked to choose no more than one task out of each category . this first run will probably take a couple of weeks , but the first results ( audio files and selected highlights ) will be discussed shortly , in order to decide whether more detail ( complex spatial relationships , temporal planning , etc . ) should be included in the design or particular constructions be elicited . regarding the completion of the belief-net , the remaining details , mainly the properties of the ontology and discourse nodes , should be added . after building in the conditional probability tables , a working prototype of the net will be ready . finally , the initial work on constructions should focus on a general overview of the dialogues with brief descriptions ; further analysis will follow from there in a top-down fashion . although there is an effort to include some of the key features of the belief-net in the design of the data gathering , not all of them can be built in . the tasks that the subjects will have to carry out will be categorised in ways that will indicate eva ( enter/view/approach ) intentions ; however , this approach may limit the variety of possible constructions used within a single category of entities . on the other hand , generating more diverse dialogues may have an adverse effect from a speech recognition perspective . a minor problem has arisen with the laboratory where recordings are supposed to take place , but this is currently being sorted out . as regards the completion of the belief-net , no work has been done on the cpt's yet . finally , it was noted that although a general overview of the pertinent constructions is attainable , no more than ten of them can be analysed in detail within the summer months . a detailed diagram of the eva belief-net was presented and some of the intermediate nodes and their properties were discussed in depth .
some of the key features and properties are : "go-there" , which is binary , and defined by the user , situation , ontology and discourse models ; "timing" ( current/next tour ) ; "reason" ( business , sight-seeing , socialising ) ; "transport" ; "length of tour" ; "costs" ; "entity" ( open , accessible ) , etc . the data collection that will provide relevant dialogues is moving along , with thirty subjects already lined up . they will be given a reading task , which will include some german proper names , and a series of tasks from the tourist domain to choose from . in order to get directions , they will then communicate with a computer system and a human operator , using a sketchy map as an aid . a different set of data is already available from the smartkom system and similar sources . a preliminary study using this data has shown that a large number of syntactic and semantic constructions can be derived from a small sample .
grad d: and we already got the crash out of the way . it did crash , so i feel much better , earlier . grad d: i did collect an agenda . so i ' m gon na go first . mwa - ha ! it should n't take too long . , so we 're out of digits . we ' ve gone once through the set . , so the only thing i have to do grad d: and pick out the ones that have problems , and either correct them or have them re - read . so we probably have like four or five more forms to be read , to be once through the set . i ' ve also extracted out about an hour 's worth . we have about two hours worth . i extracted out about an hour 's worth which are the f digits with for which whose speaker have speaker forms , have filled out speaker forms . not everyone 's filled out a speaker form . so i extracted one for speakers who have speaker forms and for meetings in which the " key " file and the transcript files are parsable . some of the early key files , it looks like , were done by hand , and so they 're not automatically parsable and i have to go back and fix those . so what that means is we have about an hour of transcribed digits that we can play with . , liz professor f: so you think two hours is the total that we have ? and you think we th , i did n't quite catch all these different things that are not quite right , but you think we 'll be able to retrieve the other hour , reasonably ? grad d: yes , . so it 's just a question of a little hand - editing of some files and then waiting for more people to turn in their speaker forms . i have this web - based speaker form , and i sent mail to everyone who had n't filled out a speaker form , and they 're slowly s trickling in . grad d: it 's for labeling the extracted audio files . by speaker id and microphone type . grad d: no , i spoke with jane about that and we decided that it 's probably not an issue that we edit out any of the errors anyway . right ? so the there are no errors in the digits , you 'll always read the string correctly . so i ca n't imagine why anyone would care . so the other topic with digits is , liz would like to elicit different prosodics , and so we tried last week with them written out in english . and it just did n't work because no one grouped them together . so it just sounded like many more lines instead of anything else . so in conversations with liz and jane we decided that if you wrote them out as numbers instead of words it would elicit more phone number , social security number - like readings . the problem with that is it becomes numbers instead of digits . when i look at this , that first line is " sixty one , sixty two , eighteen , eighty six , ten . " , and so the question is does anyone care ? , i ' ve already spoken with liz and she feels that , correct me if i ' m wrong , that for her , connected numbers is fine , grad d: as opposed to connected digits . , two hours is probably fine for a test set , but it may be a little short if we actually wanna do training and adaptation and all that other . professor f: , do you want different prosodics , so if you always had the same groupings you would n't like that ? is that correct ? professor f: no but , i was asking if that was something you really cared about because if it was n't , it seems to me if you made it really specifically telephone groupings that maybe people would n't , go and do numbers so much . if it 's phd g: i ok i it might help , i would like to g get away from having only one specific grouping . 
phd g: , so if that 's your question , but it seems to me that , at least for us , we can learn to read them as digits if that 's what people want . i ' m do n't think that 'd be that hard to read them as single digits . phd g: and it seems like that might be better for you guys since then you 'll have just more digit data , phd g: and that 's always a good thing . it 's a little bit better for me too because the digits are easier to recognize . they 're better trained than the numbers . grad d: so we could just , put in the instructions " read them as digits " . phd g: right . right , read them as single digits , so sixty - one w is read as six one , and if people make a mistake we professor f: , the other thing is we could just bag it because it 's it 's - i ' m not worrying about it , because we do have digits training data that we have from ogi . i ' m , digits numbers training that we have from ogi , we ' ve done lots and lots of studies with that . and . professor f: to some extent maybe we could just read them have them read how they read it and it just means that we have to expand our vocabulary out to that we already have . phd g: right . that 's fine with me as long as it 's just that i did n't want to the people who would have been collecting digits the other way to not have the digits . professor f: and then go back to digits for awhile , or . do yo , do you want this do you need training data or adaptation data out of this ? how much of this do you need ? with the phd g: it 's actually unclear right now . thought we 're if we 're collec collecting digits , and adam had said we were running out of the ti forms , it 'd be to have them in groups , and probably , all else being equal , it 'd be better for me to just have single digits since it 's , a recognizer 's gon na do better on those anyway , and it 's more predictable . so we can know from the transcript what the person said and the transcriber , in general . phd g: but if they make mistakes , it 's no big deal if the people say a hundred instead of " one oo " . and also w maybe we can just let them choose " zero " versus " o " as they like because even the same person c sometimes says " o " and sometimes says " zero " in different context , and that 's interesting . so i do n't have a specific need cuz if i did i 'd probably try to collect it , without bothering this group , but if we can try it grad d: ok so just add to the instructions to read it as digits not as connected numbers . phd g: right , and you can give an example like , " six sixty - one would be read as six one " . postdoc e: and i actually it 's no more artificial than what we ' ve been doing with words . phd g: ok i also had a hard time with the words , but then we went back and forth on that . ok , so let 's give that a try grad d: ok . and is the spacing alright or do you think there should be more space between digits and groups ? postdoc e: i it to me it looks like you ' ve got the func the idea of grouping and you have the grou the idea of separation and , it 's just a matter of u i the instructions , that 's all . phd g: righ - right , and you just they 're randomly { nonvocalsound } generated and randomly assigned to digits . professor f: , i was just gon na say , so we have in the vicinity of forty hours of recordings now . and you 're saying two hours , is digits , so that 's roughly the ratio then , something like twenty to one . which i makes sense . so if we did another forty hours of recordings then we could get another couple hours of this . 
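the instruction change is easy to pin down as a mapping ; a sketch , also letting readers choose " zero " versus " o " freely , with the function name purely illustrative :

    NAMES = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
             "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

    def as_read(group, o_for_zero=False):
        # e.g. as_read("61") -> "six one", not "sixty one"
        words = [NAMES[d] for d in group]
        if o_for_zero:
            words = ["o" if w == "zero" else w for w in words]
        return " ".join(words)

    print(as_read("61"))        # six one
    print(as_read("10", True))  # one o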
, like you say , a couple hours for a test set 's ok . it 'd be to get , more later because we 'll we might use this up , in some sense , but postdoc e: , i also would like to argue for that cuz it seems to me that , there 's a real strength in having the same test replicated in a whole bunch of times and adding to that basic test bank . ? cuz then you have , more and more , u chances to get away from random errors . and , the other thing too is that right now we have a stratified sample with reference to dialect groups , and it might be there might be an argument to be made for having f for replicating all of the digits that we ' ve done , which were done by non - native speakers so that we have a core that replicates the original data set , which is american speakers , and then we have these stratified additional language groups overlapping certain aspects of the database . grad d: that trying to duplicate , spending too much effort trying to duplicate the existing ti - digits probably is n't too worthwhile because the recording situation is so different . it 's gon na be very hard to be comparable . postdoc e: except that if you have the stimuli comparable , then it says something about the contribution of setting postdoc e: what 's an example of a of m some of the other differences ? any other a difference ? professor f: i individual human glottis is going to be different for each one , it 's just there 's so many things . grad d: the corpus itself . , we 're collecting it in a read digit in a particular list , and i ' m that they 're doing more specific . if i remember correctly it was like postman reading zipcodes and things like that . professor f: , no ti - digits was read in th in read in the studio i believe . grad d: i have n't ever listened to ti - digits . so i do n't really know how it compares . but but regardless it 's gon na it 's hard to compare cross - corpus . professor f: and they 're different circumstances with different recording environment and , so it 's really pretty different . but the idea of using a set thing was just to give you some framework , so that even though you could n't do exact comparisons , it would n't be s valid scientifically at least it 'd give you some frame of reference . , it 's not phd b: hey liz , what what do the groupings represent ? you said there 's like ten different groupings ? phd g: right , just groupings in terms of number of groups in a line , and number of digits in a group , and the pattern of groupings . phd g: , roughly looked at what kinds of digit strings are out there , and they 're usually grouped into either two , three , or four , four digits at a time . and they can have , actually , things are getting longer and longer . in the old days you probably only had three sequences , and telephone numbers were less , and . so , there 's between , if you look at it , there are between like three and five groups , and each one has between two and four groupings i purposely did n't want them to look like they were in any pattern . grad d: and which group appears is picked randomly , and what the numbers are picked randomly . so unlike the previous one , which i d simply replicated ti - digits , this is generated randomly . 
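that generation scheme fits in a few lines ; a sketch , with the three-to-five groups and two-to-four digits per group taken straight from the description above :

    import random

    def digit_line(rng=random):
        # three to five groups per line, two to four digits per group,
        # pattern and digits both drawn at random
        groups = []
        for _ in range(rng.randint(3, 5)):
            groups.append("".join(str(rng.randint(0, 9))
                                  for _ in range(rng.randint(2, 4))))
        return "   ".join(groups)  # wide spacing shows the grouping

    for _ in range(10):
        print(digit_line())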
phd g: but it 'd be great i to be able to compare digits , whether it 's these digits or ti - digits , to speakers , and compare that to their spontaneous speech , and then we do need a fair amount of digit data because you might be wearing a different microphone phd g: and , so it 's to have the digits , replicated many times . especially for speakers that do n't talk a lot . so , for adaptation . no , i ' m serious , phd g: so we have a problem with acoustic adaptation , and we 're not using the digit data now , but phd g: not for adaptation , nope . v w we 're not we were running adaptation only on the data that we ran recognition on and i 'd as soon as someone started to read transcript number , that 's read speech and " , we 're gon na do better on that , phd g: but , it might be fair to use the data for adaptation , so . so those speakers who are very quiet , shy grad d: do you think that would help adapting on , i have a real problem with that . phd g: , it sh it 's the same micropho see we have that in the same meeting , professor f: , for the acoustic research , for the signal - processing , farfield , i see it as the place that we start . but , th , it 'd be to have twenty hours of digits data , but the truth is i ' m hoping that we through the that you guys have been doing as you continue that , we get , the best we can do on the spontaneous , nearfield , and then , we do a lot of the testing of the algorithms on the digits for the farfield , and at some point when we feel it 's mature and we understand what 's going on with it then we have to move on to the spontaneous data with the farfield . so . phd g: the only thing that we do n't have , i know this sounds weird , and maybe it 's completely stupid , but we do n't have any overlapping digits . grad d: the the problem i see with trying to do overlapping digits is the cognitive load . phd g: i ' m just talkin for the that like dan ellis is gon na try , grad d: i if you plug you 're ears you could do it , but then you do n't get the same effects . phd g: , what is actually no not the overlaps that are - governed linguistically , but the actual fact that there is speech coming from two people and the beam - forming stuf all the acoustic that like dan ellis and company want to do . digits are and behaved , phd g: that 's right . i mea i ' m i was serious , but i really , i ' m i do n't feel strongly enough that it 's a good idea , grad d: you do the last line , i 'll do the first line . o . that 's not bad . phd g: so , we could have a round like where you do two at a time , and then the next person picks up when the first guy 's done , . phd g: then it would go like h twice as fast , or a third as fast . phd g: i ' m actually serious if it would help people do that kind o but the people who wanna work on it we should talk to them . professor f: i do n't think we 're gon na collect vast amounts of data that way , professor f: but having a little bit might at least be fun for somebody like dan to play around with , grad d: maybe if we wanted to do that we would do it as a separate session , something like that rather than doing it during a real meeting and , do two people at a time then three people at a time and things like that . phd g: if we have nothing if we have no agenda we could do it some week . postdoc e: c can i have an another question w about this ? so , there are these digits , which are detached digits , but there are other words that contain the same general phon phoneme sequences . 
like " wonderful " has " one " in it and victor borge had a piece on this where he inflated the digits . , i wonder if there 's , an if there would be a value in having digits that are in essence embedded in real words to compare in terms of like the articulation of " one " in " wonderful " versus " one " as a digit being read . professor f: , they do n't all work as , do they ? what does nine work in ? postdoc e: because , if we were to each read his embedded numbers words in sent in sentences cuz it 's like an entire sketch he does and i would n't take the inflated version . so he talks about the woman being " two - derful " , and a but , if it were to be deflated , just the normal word , it would be like a little story that we could read . professor f: i . the question is what the research is , so , i presume that the reason that you wanted to have these digits this way is because you wanted to actually do some research looking at the prosodic form here . professor f: so if somebody wanted to do that , if they wanted to look at the difference of the phones in the digits in the context of a word versus the digits a non - digit word versus in digit word , that would be a good thing to do , but someone would have to express interest in that . grad d: ok , are we done with digits ? we have asr results from liz , transcript status from jane , and disk space and storage formats from don . does do we have any prefer preference on which way we wanna go ? phd g: i was actually gon na skip the asr results part , in favor of getting the transcription talked about since that 's more important to moving forward , but morgan has this paper copy and if people have questions , it 's pretty preliminary in terms of asr results because we did n't do anything fancy , but e just having the results there , and pointing out some main conclusions like it 's not the speaking style that differs , it 's the fact that there 's overlap that causes recognition errors . and then , the fact that it 's almost all insertion errors , which you would expect but you might also think that in the overlapped regions you would get substitutions and , leads us to believe that doing a better segmentation , like your channel - based segmentation , or some , echo cancellation to get back down to the individual speaker utterances would be probably all that we would need to be able to do good recognition on the close - talking mikes . grad d: , why do n't you , if you have a hard copy , why do n't you email it to the list . professor f: it 's the same thing i mailed to every everybody that w where it was , phd g: so , we , did a lot of work on that and it 's let 's see , th i the other neat thing is it shows for w that the lapel , within speaker is bad . phd g: yes , cuz that 's all that w had been transcribed at the time , but as we i wanted to here more about the transcription . if we can get the channel asynchronous or the closer t that would be very interesting for us professor f: , that 's why i only used the part from use which we had about the alt over all the channels phd g: that 's right . i pulled out a couple classic examples in case you wanna u use them in your talk of phd g: chuck on the lapel , so chuck wore the lapel three out of four times . phd g: , and i wore the lapel once , and for me the lapel was ok . i still and i why . 
but , phd g: right , but when chuck wore the lapel and morgan was talking there 're a couple really long utterances where chuck is saying a few things inside , and it 's picking up all of morgan 's words pretty and so the rec , there 're error rates because of insertion insertions are n't bounded , so with a one - word utterance and ten insertions you got huge error rate . and that 's where the problems come in . so i this is what we expected , but it 's to be able to show it . and also wanted to mention briefly that , andreas and i called up dan ellis who 's still stuck in switzerland , and we were gon na ask him if there 're , what 's out there in terms of echo cancellation and things like that . not that we were gon na do it , but we wanted to would need to be done . phd g: and he we ' ve given him the data we have so far , so these sychronous cases where there are overlap . and he 's gon na look into trying to run some things that are out there and see how it can do phd g: because right now we 're not able to actually report on recognition in a real paper , like a eurospeech paper , because it would look premature . phd b: so the idea is that you would take this big hunk where somebody 's only speaking a small amount in it , and then try to figure out where they 're speaking based on the other peopl phd g: right . or who 's at any point in time who 's the foreground speaker , who 's the background speaker . phd g: , it would be techniques used from adaptive echo cancellation which i enough about to talk about . phd g: right , and that would be similar to what you 're also trying to do , but using , more than energy i what exactly would go into it . phd g: so the idea is to run this on the whole meeting . and get the locations , which gives you also the time boundaries of the individual speak phd g: right . except that there are many techniques for the kinds of cues , that you can use to do that . professor f: , dave is , also gon na be doin usin playing around with echo cancellation for the nearfield farfield , so we 'll be phd g: and i espen ? this is he here too ? may also be working so it would just be ver that 's really the next step because we ca n't do too much , on term in terms of recognition results knowing that this is a big problem , until we can do that processing . and so , once we have some of yours , phd b: this also ties into one of the things that jane is gon na talk about too . grad d: so i have some naming conventions that we should try to agree on . so let 's do that off - line , grad d: , names in the names to i ds , so you and it does all sorts of matches because the way people filled out names is different on every single file so it does a very fuzzy match . phd g: so at this point we can finalize the naming , and we 're gon na re rewrite out these waveforms that we did because as you notice in the paper your " m o in one meeting and " m o - two " in another meeting and it 's we just need to standardize the grad d: so th i now have a script that you can just say look up morgan , and it will give you his id . grad d: alright . do we don , you had disk space and storage formats . is that something we need to talk about at the meeting , or should you just talk with chuck at some other time ? grad c: , i had some general questions just about the compression algorithms of shortening waveforms and i exactly who to ask . that maybe you would be the person to talk to . so , is it a lossless compression when you compress , so grad c: it just uses entropy coding ? 
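going back to the foreground / background idea above : the crude baseline that any echo-cancellation or crosstalk method would have to beat is just a per-frame energy comparison across the close-talking channels ; a sketch , with all frame sizes as guessed defaults , and nothing like what dan ellis would actually run :

    import numpy as np

    def foreground_by_energy(channels, sr, frame_ms=25, hop_ms=10):
        # label each frame with the index of the loudest close-talking
        # channel; crude, but it gives per-channel time boundaries to
        # compare the fancier methods against
        frame = int(sr * frame_ms / 1000)
        hop = int(sr * hop_ms / 1000)
        n = min(len(c) for c in channels)
        labels = []
        for start in range(0, n - frame, hop):
            e = [float(np.sum(np.asarray(c[start:start + frame],
                                         dtype=float) ** 2))
                 for c in channels]
            labels.append(int(np.argmax(e)))
        return labels  # one winning channel index per hop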
so , i my question would be is got this new eighteen gig drive installed . , which is professor f: , this is an eighteen gig drive , or is it a thirty six gig drive with eighteen grad d: , so the only question is how much of it the distinction between scratch and non - scratch is whether it 's backed up or not . grad d: so what you wanna do is use the scratch for that you can regenerate . so , the that is n't backed up is not a big deal because disks do n't crash very frequently , as long as you can regenerate it . phd g: i 'd leave all the all the transcript should n't should be backed up , but all the waveform sound files should not be backed up , grad c: so , i th the other question was then , should we shorten them , downsample them , or keep them in their original form ? grad d: it just depends on your tools . , because it 's not backed up and it 's just on scratch , if your sc tools ca n't take shortened format , i would leave them expanded , so you do n't have to unshorten them every single time you wanna do anything . phd g: , we get the same performance . the r the front - end on the sri recognizer just downsamples them on the fly , grad c: , i the only argument against downsampling is to preserve just the original files in case we want to experiment with different filtering techniques . phd g: fe you 'd you wanna not . so we 're what we 're doing is we 're writing out , this is just a question . we 're writing out these individual segments , that wherever there 's a time boundary from thilo , or jane 's transcribers , we chop it there . and the reason is so that we can feed it to the recognizer , and throw out ones that we 're not using and . and those are the ones that we 're storing . grad d: , as i said , since that 's it 's regeneratable , what i would do is take downsample it , and compress it however you 're e the sri recognizer wants to take it in . professor f: , i ' m . as , as long as there is a form that we can come from again , that is not downsampled , then , professor f: then it 's fine . but for fu future research we 'll be doing it with different microphone positions and so on phd b: so the sri front - end wo n't take a an a large audio file name and then a list of segments to chop out from that large audio file ? they actually have to be chopped out already ? phd g: , it 's better if they 're chopped out , and it will be , y we could probably write something to do that , but it 's actually convenient to have them chopped out cuz you can run them , in different orders . you c you can actually move them around . phd g: that 's a lot quicker than actually trying to access the wavefile each time , find the time boundaries and so in principle , you could do that , grad d: right , so if you did it that way you would have to generate a program that looks in the database somewhere , extracts out the language , finds the time - marks for that particular one , do it that way . the way they 're doing it , you have that already extracted and it 's embedded in the file name . and so , you just say grad d: y so you just say " asterisk e asterisk dot wave " , and you get what you want . phd g: is and the other part is just that once they 're written out it is a lot faster to process them . grad d: this is all just temporary access , so i do n't it 's all just it 's fine . fine to do it however is convenient . professor f: it just depends how big the file is . 
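the chop-and-write step being discussed might look like this with python 's standard wave module ; a sketch , where the file name in the usage comment is a hypothetical stand-in for the naming convention still being finalized :

    import wave

    def chop(src_path, start_s, end_s, out_path):
        # write one [start_s, end_s) stretch of a long per-channel recording
        # to its own small file, so recognizer runs and xwaves never have to
        # seek around inside the hour-long original
        with wave.open(src_path, "rb") as src:
            params = src.getparams()
            sr = src.getframerate()
            src.setpos(int(start_s * sr))
            frames = src.readframes(int((end_s - start_s) * sr))
        with wave.open(out_path, "wb") as out:
            out.setparams(params)  # writeframes fixes the frame count on close
            out.writeframes(frames)

    # hypothetical name with meeting, speaker id and channel embedded, so a
    # glob like *me013* pulls out one speaker 's segments:
    # chop("mr004_chan3.wav", 12.48, 15.02, "mr004_me013_c3_0012.wav")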
if the file sits in memory you can do extremely fast seeks phd g: so we 're also looking at these in waves like for the alignments and . you ca n't load an hour of speech into x waves . you need to s have these small files , and , even for the transcriber program phd g: , if you try to load s really long waveform into x waves , you 'll be waiting there for phd b: no , i ' m not suggesting you load a long wave file , i ' m just saying you give it a start and an end time . and it 'll just go and pull out that section . grad d: i th w the transcribers did n't have any problem with that did they jane ? phd g: we have a problem with that , time - wise on a it - it 's a lot slower to load in a long file , phd g: overall you could get everything to work by accessing the same waveform and trying to find two , the begin and end times . but it 's more efficient , if we have the storage space , to have the small ones . grad d: if we do n't have a spare disk sitting around we go out and we buy ourselves an eighty gigabyte drive and make it all scratch space . , it 's not a big deal . postdoc e: you 're right about the backup being a bottleneck . it 's good to think towards scratch . grad d: so remind me afterward and i 'll and we 'll look at your disk and see where to put . grad c: ok . alright . , i could just u do a du on it right ? and just see which how much is on each grad d: each partition . and you wanna use , either xa or scratch . x question mark , anything starting with x is scratch . postdoc e: ok . so i got a little print - out here . so three on this side , three on this side . and i stapled them . alright so , first of all , there was a an interest in the transcribe transcription , checking procedures and tell you first , to go through the steps although you ' ve probably seen them . , as you might imagine , when you 're dealing with , r really c a fair number of words , and , @ natural speech which means s self - repairs and all these other factors , that there 're lots of things to be , s standardized and streamlined and checked on . and , so , i did a bunch of checks , and the first thing i did was a spell - check . and at that point i discovered certain things like , " accommodate " with one " m " , that thing . and then , in addition to that , i did an exhaustive listing of the forms in the data file , which included n detecting things like f faulty punctuation and things phd b: so you 're doing these so the whole process is that the transcribers get the conversation and they do their pass over it . postdoc e: exactly . i do these checks . and so , i do a an exhaustive listing of the forms actually , i will go through this in order , so if we could maybe and stick keep that for a second cuz we 're not ready for that . postdoc e: , . exactly ! alright so , a spelling check first then an exhaustive listing of the , all the forms in the data with the punctuation attached and at that point i pick up things like , , word followed by two commas . and th and then another check involves , being that every utterance has an identifiable speaker . and if not , then that gets checked . then there 's this issue of glossing s w so - called " spoken - forms " . so there mo for the most part , we 're keeping it standard wo word level transcription . but there 's w and that 's done with the assumption that pronunciation variants can be handled . 
so for things like " and " , the fact that someone does n't say the " d " , that 's not important enough to capture in the transcription because a good pronunciation , , model would be able to handle that . however , things like " cuz " where you 're lacking an entire very prominent first syllable , and furthermore , it 's a form that 's specific to spoken language , those are r reasons f for those reasons i kept that separate , and used the convention of using " cuz " for that form , however , glossing it so that it 's possible with the script to plug in the full orthographic form for that one , and a couple of others , not many . so " wanna " is another one , going , " gon na " is another one , with just the assumption , again , that this th these are things which it 's not really fair to a c consider expect that a pronunciation model , to handle . and chuck , you in you indicated that " cuz " is one of those that 's handled in a different way also , did n't you ? did i postdoc e: so so it might not have been it might not have been you , but someone told me that " cuz " is treated differently in , i u in this context because of that r reason that , it 's a little bit farther than a pronunciation variant . ok , so after that , let 's see , phd b: so that was part of the spell - check , or was that was after the spell - check ? postdoc e: so when i get the exhau so the spell - check picks up those words because they 're not in the dictionary . so it gets " cuz " and " wanna " and that postdoc e: , - . run it through i have a sed , so i do sed script saying whenever you see " gon na " , " convert it to gon na " , " gloss equals quote going - to quote " , and with all these things being in curly brackets so they 're always distinctive . ok , i also wrote a script which will , retrieve anything in curly brackets , or anything which i ' ve classified as an acronym , and a pronounced acronym . and the way i tag ac pronounced acronyms is that i have underscores between the components . so if it 's " acl " then it 's " a " underscore " c " underscore " l " . grad d: and so your list here , are these ones that actually occurred in the meetings ? phd g: , can i ask a question about the glossing , before we go on ? so , for a word like " because " is it that it 's always predictably " because " ? , is " cuz " always meaning " because " ? postdoc e: yes , but not the reverse . so sometimes people will say " because " in the meeting , and if they actually said " because " , then it 's written as " because " with no w cuz does n't even figure into the equation . professor f: but but in our meetings people do n't say " hey cuz how you doing ? " phd g: the the only problem is that with for the recognition we map it to " because " , and so if we know that " cuz " grad d: you have the gloss form so you always replace it . if that 's how what you wanna do . postdoc e: and don knows this , and he 's bee he has a glo he has a script that postdoc e: on the different types of comments , which we 'll see in just a second . so the pronounceable acronyms get underscores , the things in curly brackets are viewed as comments . there 're comments of four types . so this is a good time to introduce that . the four types . w and maybe we 'll expand that but the comments are , of four types mainly right now . one of them is , the gloss type we just mentioned . grad d: so a are we done with acronyms ? cuz i had a question on what this meant . postdoc e: i ' m still doing the overview . 
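the sed pass and the curly-bracket retrieval script just described could be sketched in python along these lines ; the exact gloss syntax in the output is a guess at the convention , not the real one :

    import re

    # spoken forms kept in the transcript, full form recorded as a gloss
    GLOSSES = {"cuz": "because", "gonna": "going to", "wanna": "want to"}

    def add_glosses(line):
        for spoken, full in GLOSSES.items():
            line = re.sub(r"\b%s\b" % spoken,
                          '%s {gloss= "%s"}' % (spoken, full), line)
        return line

    def pull_markup(line):
        # retrieve anything in curly brackets (the comment types) plus
        # pronounced acronyms marked with underscores
        comments = re.findall(r"\{[^}]*\}", line)
        acronyms = re.findall(r"\b(?:[a-z]_)+[a-z]\b", line)
        return comments, acronyms

    print(add_glosses("cuz we wanna finish"))
    print(pull_markup("the a_c_l paper {voc laugh}"))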
i have n't actually gotten here yet . postdoc e: ok so , gloss is things like replacing the full form u with the , more abbreviated one to the left . , then you have if it 's , there 're a couple different types of elements that can happen that are n't really properly words , and wo some of them are laughs and breathes , so we have that 's prepended with a v a tag of " voc " . postdoc e: and the non - vocal ones are like door - slams and tappings , and that 's prepended with a no non - vocalization . phd b: so then it just an ending curly brace there , or is there something else in there . postdoc e: and then the no non - vocalization would be something like a door - slam . they always end . so it 's like they 're paired curly brackets . and then the third type right now , is m things that fall in the category of comments about what 's happening . so it could be something like , " referring to so - and - so " , talking about such - and - such , , " looking at so - and - so " . phd b: so on the m on the middle t so , in the first case that gloss applies to the word to the left . but in the middle two th - it 's not applying to anything , right ? grad d: the " qual " can be the " qual " is applying to the left . postdoc e: , and actually , it is true that , with respect to " laugh " , there 's another one which is " while laughing " , postdoc e: and that is , i an argument could be made for this tur turning that into a qualitative statement because it 's talking about the thing that preceded it , but at present we have n't been , , coding the exact scope of laughing , and so to have " while laughing " , that it happened somewhere in there which could mean that it occurred separately and following , or , including some of the utterances to the left . have n't been awfully precise about that , but i have here , now we 're about to get to the to this now , i have frequencies . so you 'll see how often these different things occur . but , , the very front page deals with this , final c pa , aspect of the standardization which has to do with the spoken forms like " - " and " ha " and " - " and all these different types . and , , someone pointed out to me , this might have been chuck , about how a recognizer , if it 's looking for " - " with three m 's , and it 's transcribed with two m 's , that it might increase the error rate which is which would really be a shame because , i p i personally w would not be able to make a claim that those are dr dramatically different items . so , right now i ' ve standardized across all the existing data with these spoken forms . postdoc e: all existing data except thirty minutes which got found today . so , i ' m gon na check postdoc e: acsu - actually . i got it was stored in a place i did n't expect , postdoc e: and this is this 'll be great . so i 'll be able to get through that tonight , and then everyth i , actually later today probably . and so then we 'll have everything following these conventions . but you notice it 's really rather a small set of these kinds of things . and i made it so that these are , with a couple exceptions but , things that you would n't find in the spell - checker so that they 'll show up really easily . and , grad c: jane , can i ask you a question ? what 's that very last one correspond to ? i do n't even know how to pronounce that . postdoc e: , . now that s only occurs once , and i ' m thinking of changing that . postdoc e: i have n't heard it actually . i n i need to listen to that one . 
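a minimal shell sketch of the checking passes just described , assuming a plain-text transcript ; the file name transcript.txt is hypothetical and the tokenization is simplified :

    # exhaustive listing of every form in the data with punctuation attached ,
    # so faulty forms ( e.g. a word followed by two commas ) surface quickly
    tr ' ' '\n' < transcript.txt | sort | uniq -c | sort -rn > forms.txt

    # spell-check pass ; nonstandard spoken forms like " cuz " and " wanna "
    # fall out here because they are not in the dictionary
    aspell list < transcript.txt | sort -u > unknown-forms.txt

    # exhaustive listing of the curly-bracket comments and their frequencies
    grep -o '{[^}]*}' transcript.txt | sort | uniq -c | sort -rn

the point is only that each check is a one-liner over the master file , so it is cheap to re-run after every correction pass .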
postdoc e: did she hear the th did she actually hear it ? cuz i have n't heard it . phd g: no , we just gave her a list of words that , were n't in our dictionary and so it picked up like this , and she just did n't listen so she did n't know . we just we 're waiting on that just to do the alignments . postdoc e: i ' m curious to se hear what it is , but i did n't know wanna change it to something else until i knew . grad d: yes , that 's right . we 're gon na have a big problem when we talk about that . grad d: , or if you 're a c programmer . you say arg - c and arg - v all the time . phd g: i have one question about the " eh " versus like the " ah " and the " uh " . postdoc e: that 's partly a nonnative - native thing , but i have found " eh " in native speakers too . but it 's mostly non - native phd g: , right , cuz there were some speakers that did definite " 's " but right now we phd g: so , it 's actually probably good for us to know the difference between the real " and the one that 's just like " or transcribed " aaa " cuz in like in switchboard , you would see e all of these forms , but they all were like " . phd g: or the " uh " , " eh " , " ah " were all the same . and then , we have this additional non - native version of , like " eeh " . grad c: all the " eh " 's i ' ve seen have been like that . they ' ve been like " like that have bee has been transcribed to " eh " . and sometimes it 's stronger , grad d: i ' m just these poor transcribers , they 're gon na hate this meeting . phd g: but you 're a native german speaker so it 's not a issue for it 's only postdoc e: that makes sense . , and so , , th i have there are some , americans who are using this " too , and i have n't listened to it systematically , maybe with some of them , they 'd end up being " 's " but , i my spot - checking has made me think that we do have " in also , american e data represented here . but any case , that 's the this is reduced down from really quite a long a much longer list , postdoc e: functionally pretty , also it was fascinating , i was listening to some of these , i two nights ago , and it 's just hilarious to liste to do a search for the " - 's " . and you get " - " and diff everybody 's doing it . postdoc e: just i wanted to say i w think it would be fun to make a montage of it because there 's a " - . postdoc e: all these different vocal tracts , but it 's the same item . it 's very interesting . , then the acronyms y and the ones in parentheses are ones which the transcriber was n't of , and i have n't been able to listen to clarify , but you can see that the parenthesis convention makes it very easy to find them postdoc e: question mark is punctuation . so it they said that @ , " dc ? " postdoc e: , so the only , and i do have a stress marker here . sometimes the contrastive stress is showing up , and , professor f: i ' m , i got lost here . what - w what 's the difference between the parenthesized acronym and the non - parenthesized ? postdoc e: the parenthesized is something that the transcriber thought was ann , but was n't entirely . so i 'd need to go back or someone needs to go back , and say , yes or no , and then get rid of the parentheses . but the parentheses are used only in that context in the transcripts , of noti noticing that there 's something uncertain . grad d: i know ! i was saying that a lot of them are the networks meeting . postdoc e: nsa , a lot of these are coming from them . i listened to some of that . 
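a sketch of the retrieval pass for acronyms under these conventions , where underscores join the letters of pronounced acronyms and parentheses mark uncertain material ; the exact regular expressions are an assumption :

    # pronounced acronyms are tagged with underscores , e.g. a_c_l
    grep -oE '[a-z](_[a-z])+' transcript.txt | sort | uniq -c

    # parentheses are used only for uncertain forms , so this pulls out
    # everything still waiting to be listened to and confirmed
    grep -oE '\([^)]*\)' transcript.txt | sort | uniq -c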
postdoc e: and robustness has a fair amount , but the nsa group is just very many . postdoc e: right and sometimes , you see a couple of these that are actually " ok 's " so it 's may be that they got to the point where it was low enough understandable understandability that they were n't entirely the person said " ok . " , so it is n't really necessarily a an undecipherable acronym , postdoc e: but just n needs to be double checked . now we get to the comments . this postdoc e: number of times out of the entire database , w except for that last thirty minutes i have n't checked yet . phd a: so what is the difference between " papers rustling " and " rustling papers " ? postdoc e: i 'd have to listen . i w i 'd like to standardize these down farther but , , to me that sounds equivalent . but , i ' m a little hesitant to collapse across categories unless i actually listen to them . phd g: this is exactly how people will prove that these meetings do differ because we 're recording , right ? phd g: y no normally you do n't go around saying , " now you ' ve said it six times . postdoc e: but did you notice that there were seven hundred and eighty five instances of " ok " ? grad c: is this after like did you do some replacements for all the different form of " ok " to this ? postdoc e: of " ok " , yes . so that 's the single existing convention for " ok " . grad c: although , what 's there 's one with a slash after it . that 's disturbing . postdoc e: i actually explicitly looked for that one , and that , i ' m not exactly about that . postdoc e: no , i looked for that , but that does n't actually exist . and it may be , i do n't i ca n't explain that . postdoc e: it 's the only pattern that has a slash after it , and it 's an epiphenomenon . grad d: so i 'll just i was just looking at the bottom of page three there , is that " to be " or " not to be " . postdoc e: there is th one y , no , that 's r that 's legitimate . so now , comments , you can see they 're listed again , same deal , with exhaustive listing of everything found in everything except for these final th thirty minutes . grad d: ok so , on some of these quals , are they really quals , or are they glosses ? so like there 's a " qual tcl " . postdoc e: tcl . where do you see that ? the reason is because w it was said " tickle " . grad d: i see , i see . so it 's not gloss . ok , i see . postdoc e: on the in the actual script in the actual transcript , i s i so this happens in the very first one . i actually wrote it as " tickle " . because we they did n't say " tcl " , they said " tickle " . and then , following that is " qual tcl " . phd g: lan ok - we ok it 's in the language model , w , but it so it 's the pronunciation model that has to have a pronunciation of " tickle " . phd g: what i meant is that there should be a pronunciation " tickle " for tcl as a word . and that word in the in , it stays in the language model wherever it was . you never would put " tickle " in the language model in that form , there 's actually a bunch of cases like this with people 's names and phd b: so how w there 'd be a problem for doing the language modeling then with our transcripts the way they are . phd g: yes . so th there 's a few cases like that where the , the word needs to be spelled out in a consistent way as it would appear in the language , but there 's not very many of these . tcl 's one of them . grad d: right , so y so , whoever 's creating the new models , will have to also go through the transcripts and change them synchronously . 
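the gloss-plugging script mentioned here might look roughly like the following ; the curly-bracket syntax is a guess at the convention described earlier , and the file names and the " wanna " gloss are illustrative :

    # tagging pass : attach a gloss wherever a single-token spoken form appears
    sed -e 's/\bcuz\b/cuz {gloss= "because"}/g' \
        -e 's/\bwanna\b/wanna {gloss= "want to"}/g' transcript.txt > tagged.txt

    # expansion pass : replace the word to the left with its gloss ,
    # which is roughly the form the language model wants
    sed -E 's/[^ ]+ \{gloss= "([^"]+)"\}/\1/g' tagged.txt > expanded.txt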
phd g: we have this there is this thing i was gon na talk to you about at some point about , what do we do with the dictionary as we 're up updating the dictionary , these changes have to be consistent with what 's in the like spelling people 's names and . if we make a spelling correction to their name , like someone had deborah tannen 's name misspelled , and since we know who that is , we could correct it , phd g: but we need to make we have the misspel if it does n't get corrected we have to have a pronunciation as a misspelled word in the dictionary . things like that . postdoc e: , now the tannen corre the spelling c change . , that 's what gets i picked those up in the frequency check . phd g: so if there 's things that get corrected before we get them , it 's not an issue , but if there 's things that , we change later , then we always have to keep our the dictionary up to date . and then , in the case of " tickle " i we would just have a , word " tcl " which phd g: which normally would be an acronym , " tcl " but just has another pronunciation . postdoc e: icsi is one of those that sometimes people pronounce and sometimes they say " icsi . " so , those that are l are listed in the acronyms , i actually know they were said as letters . the others , e those really do need to be listened to cuz i have n't been able to go to all the ic icsi things , professor f: don and i were just noticing , love this one over on page three , vocal gesture mimicking sound of screwing something into head to hold mike in place . grad d: a lot of these are me the " beep is said with a high pit high pitch and lengthening . " phd g: in the old because he was saying , " how many e 's do i have to allow for ? " postdoc e: that 's been changed . so , exactly , that 's where the lengthening comment c came in . postdoc e: because you see " beep " and it gets kicked out in the spelling , and it also gets kicked out in the , freq frequency listing . i have the there 're various things like " breathe " versus " breath " versus " inhale " and , hhh , i . they do n't have any implications for anything else so it 's like i ' m tempted to leave them for now an and it 's easy enough to find them when they 're in curly brackets . we can always get an exhaustive listing of these things and find them and change them . postdoc e: , but i do n't actually remember what it was . but that was eric did that . grad d: on the glosses for numbers , it seems like there are lots of different ways it 's being done . postdoc e: chuck led to a refinement here which is to add " nums " if these are parts of the read numbers . now you already that i had , in places where they had n't transcribed numbers , i put " numbers " in place of any numbers , but there are places where they , it th this convention came later an and at the very first digits task in some transcripts they actually transcribed numbers . and , d chuck pointed out that this is read speech , and it 's to have the option of ignoring it for certain other prob p , things . and that 's why there 's this other tag here which occurs a hundred and five or three hundred and five times right now which is just n " nums " by itself postdoc e: which means this is part of the numbers task . i may change it to " digits " . , i with the sed command you can really just change it however you want because it 's systematically encoded , ? have to think about what 's the best for the overall purposes , but in any case , " numbers " and " nums " are a part of this digits task thing .
, now th then i have these numbers that have quotation marks around them . , i did n't want to put them in as gloss comments because then you get the substitution . and actually , th , the reason i b did it this way was because i initially started out with the other version , you have the numbers and you have the full form and the parentheses , however sometimes people stumble over these numbers they 're saying . so you say , " seve - seventy eight point two " , or whatever . and there 's no way of capturing that if you 're putting the numbers off to the side . you ca n't have the seven and postdoc e: the left is i so example the very first one , it would be , spelled out in words , " point five " . postdoc e: so i this is also spelled out in words . point five . and then , in here , " nums " , so it 's not going to be mistaken as a gloss . it comes out as " nums quote dot five " . grad d: ok now , the other example is , in the glosses right there , gloss one dash one three zero . what what 's to the left of that ? postdoc e: now in that case it 's people saying things like " one dash so - and - so " or they 're saying " two zero " whatever . and in that case , it 's part of the numbers task , and it 's not gon na be included in the read digits anyway , postdoc e: so i m in the there is . i ' ve added that all now too . grad c: there 's a " numbers " tag i ' m i did n't follow that last thing . postdoc e: so , so gloss in the same line that would have " gloss quote one dash one thirty " , you 'd have a gloss at the end of the line saying , " curly bracket nums curly bracket " . so if you did a , a " grep minus v nums " phd g: so that 's the so there would n't be something like i if somebody said something like , " boy , i ' m really tired , ok . " and then started reading that would be on a separate line ? ok great . cuz i was doing the " grep minus v " quick and dirty and looked like that was working ok , phd g: now why do we what 's the reason for having like the point five have the " nums " on it ? is that just like when they 're talking about their data ? postdoc e: these are all these , the " nums point " , this all where they 're saying " point " something or other . postdoc e: and the other thing too is for readability of the transcript . if you 're trying to follow this while you 're reading it 's really hard to read , " so in the data column five has " , " one point five compared to seventy nine point six " , it 's like when you see the words it 's really hard to follow the argument . and this is just really a way of someone who would handle th the data in a more discourse - y way to be able to follow what 's being said . postdoc e: where we 're gon na have a master file of the channelized data . , there will be scripts that are written to convert it into these t these main two uses and th some scripts will take it down th e into a f a for ta take it to a format that 's usable for the recognizer an , other scripts will take it to a form that 's usable for the for linguistics an and discourse analysis . and , the implication that i have is that th the master copy will stay unchanged . these will just be things that are generated , postdoc e: when things change then the script will cham change but the but there wo n't be stored copies of in different versions of things . 
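the quick-and-dirty filter mentioned above would be something like this , with hypothetical file names ; the exchange that follows takes up why matching the bare string is fragile :

    # strip every line tagged as read digits before building the
    # spontaneous-speech version from the master copy
    grep -v 'nums' master-transcript.txt > spontaneous.txt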
phd g: so , i 'd have one request here which is just , maybe to make it more robust , th that the tag , whatever you would choose for this type of " nums " where it 's inside the spontaneous speech , is different than the tag that you use for the read speech . phd b: right . that would argue for changing the other ones to be " digits " . phd g: , that way w if we make a mistake parsing , we do n't see the " point five " , or it 's not there , then we a just an and actually for things like " seven eighths " , or people do fractions too i , you maybe you want one overall tag for that would be similar to that , phd g: or as long as they 're sep as they 're different strings that we that 'll make our p processing more robust . cuz we really will get rid of everything that has the " nums " string in it . phd b: i suppose what you could do is just make that you get rid of everything that has " curly brace nums curly brace " . postdoc e: that was that was my motivation . and i these can be changed , like i said . , as i said i was considering changing it to " digits " . and , it just i , it 's just a matter of deciding on whatever it is , and being the scripts know . phd g: it would probably be safer , if you 're willing , to have a separate tag just because , then we know for . and we can also do counts on them without having to do the processing . but you 're right , we could do it this way , it should work . phd g: but it 's probably not hard for a person to tell the difference because one 's in the context of a , a transcribed word string , postdoc e: the other thing is you can get really so minute with these things and increase the size of the files and the re and decrease the readability to such an extent by simply something like " percent " . now i could have adopted a similar convention for " percent " , but somehow percent is not so hard , ? i it 's just when you have these points and you 're trying to figure out where the decimal places are and we could always add it later . percent 's easy to detect . point however is a word that has a couple different meanings . and you 'll find both of those in one of these meetings , where he 's saying " the first point i wanna make is so - and - so " and he goes through four points , and also has all these decimals . phd b: , what does the sri recognizer output for things like that ? seven point five . does it output the word phd g: and and actually , the language it 's the same point , actually , the p , the word " to " and the word y th " going to " and " to go to " those are two different " to 's " and so there 's no distinction there . it 's just the word " point " has , every word has only one , e one version even if it 's a actually even like the word " read " and " read " those are two different words . they 're spelled the same way , right ? and they 're still gon na be transcribed as read . , i like the idea of having this in there , i was a little bit worried that , the tag for removing the read speech because i what if we have like " read letters " or , i , phd g: like " read something " like " read " but other than that i it sounds great . 
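a slightly safer variant along the lines suggested here matches the tag only as a curly-bracket comment , and a distinct tag for read digits makes counting trivial ; file and tag names are again hypothetical :

    # match the tag only inside curly brackets , so a stray " nums "
    # inside a transcribed word string is not thrown away
    grep -v '{nums}' master-transcript.txt > spontaneous.txt

    # with a separate tag for the read-digits task , counts come for free
    grep -c '{digits}' master-transcript.txt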
postdoc e: that , thilo requested , that we ge get some segments done by hand to e s reduce the size of the time bins wh like was chuc - chuck was mentioning earlier that , if you said , " and it was in part of a really long , s complex , overlapping segment , that the same start and end times would be held for that one as for the longer utterances , grad d: we did that for one meeting , right , so you have that data do n't you ? postdoc e: and he requested that there be , similar , samples done for five minute stretches c involving a variety of speakers and overlapping secti sections . he gave me he did the very , he did some shopping through the data and found segments that would be useful . and at this point , all four of the ones that he specified have been done . in addition the i ' ve i have the transcribers expanding the amount that they 're doing actually . postdoc e: so right now , i know that as of today we got an extra fifteen minutes of that type , and i ' m having them expand the realm on either side of these places where they ' ve already started . but if , and i and he 's gon na give me some more sections that he thinks would be useful for this purpose . because it 's true , if we could do the more fine grained tuning of this , using an algorithm , that would be so much more efficient . and , . so this is gon na be useful to expand this . phd a: so we sh perhaps we should try to start with those channelized versions just to try it . give it give one tr transcriber the channelized version of my speech - nonspeech detection and look if that 's helpful for them , or just let them try if that 's better or if they if they can postdoc e: that 'd be excellent . , that 'd be really great . as it stands we 're still in the phase of , cleaning up the existing data getting things , in i m more tight tightly time , aligned . i also wanna tell , i also wanted to r raise the issue that ok so , there 's this idea we 're gon na have this master copy of the transcript , it 's gon na be modified by scripts t into these two different functions . and actually the master postdoc e: two two or more . and that the master is gon na be the channelized version . so right now we ' ve taken this i initial one , it was a single channel the way it was input . and now , to the advances made in the interface , we can from now on use the channelized part , and , any changes that are made get made in the channelized version thing . but i wanted to get all the finished all the checks grad c: so , have those e the vis the ten hours that have been transcribed already , have those been channelized ? and i know i ' ve seen @ i ' ve seen they ' ve been channelized , grad c: have they been has the time have the time markings been adjusted , p on a per channel postdoc e: , for a total of like twenty m f for a total of let 's see , four times total of about an thirty minutes . that 's that 's been the case . grad c: i , i if we should talk about this now , or not , but i grad c: , i know . no , but my question is like should i until all of those are processed , and channelized , like the time markings are adjusted before i do all the processing , and we start like branching off into the our layer of transcripts . postdoc e: , the problem is that some of the adjustments that they 're making are to bring are to combine bins that were time bins which were previously separate . and the reason they do that is sometimes there 's a word that 's cut off . 
and so , i it 's true that it 's likely to be adjusted in the way that the words are more complete . grad c: no i know that adjusting those things are gon na is gon na make it better . postdoc e: so i it 's gon na be a more reliable thing and i ' m not grad c: i ' m about that , but do you have like a time frame when you can expect like all of it to be done , or when you expect them to finish it , or postdoc e: partly it depends on how , how e effective it will be to apply an algorithm because i this takes time , it takes a couple hours t to do , ten minutes . phd b: so right now the what you 're doing is you 're taking the , the o original version and you 're channelizing yourself , right ? grad c: . i ' m doing it myself . i if the time markings are n't different across channels , like the channelized version really does n't have any more information . so , i was just , originally i had done before like the channelized versions were coming out . phd b: so i th probably the way it 'll go is that , when we make this first general version and then start working on the script , that script @ that will be ma primarily come from what you ' ve done , we 'll need to work on a channelized version of those originals . phd b: and so it should be identical to what you have t except for the one that they ' ve already tightened the boundaries on . , and then probably what will happen is as the transcribers finish tightening more and more , that original version will get updated and then we 'll rerun the script and produce better versions . but the i the ef the effect for you guys , because you 're pulling out the little wave forms into separate ones , that would mean these boundaries are constantly changing you 'd have to constantly re rerun that , so , maybe phd g: the harder part is making that the transc the transcription so if you b merge two things , then that it 's the sum of the transcripts , but if you split inside something , you do n't where the word which words moved . and that 's wh that 's where it becomes a little bit , having to rerun the processing . the cutting of the waveforms is pretty trivial . grad c: as long as it can all be done automatically , then that 's not a concern . , if have to run three scripts to extract it all and let it run on my computer for an hour and a half , or however long it takes to parse and create all the reference file , that 's not a problem . so . as long as we 're at that point . and i know exactly like what the steps will work what 's going on , in the editing process , postdoc e: so that 's i could there were other checks that i did , but it 's that we ' ve unless you think there 's anything else , that i ' ve covered it .
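the waveform cutting that has to be re-run when boundaries move could be as simple as the sketch below ; sox is one tool that can do the trim , and segments.txt ( one " start end name " triple per line ) , meeting.wav , and the 8 khz output rate are all assumptions :

    # re-cut the per-segment files from the long recording whenever
    # the time bins change , downsampling on the way out
    while read start end name; do
        dur=$(awk -v s="$start" -v e="$end" 'BEGIN { print e - s }')
        sox meeting.wav "$name.wav" trim "$start" "$dur" rate 8000
    done < segments.txt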
the berkeley meeting recorder group discussed digits data , recent asr results , the status of transcriptions , and disk space and storage format issues . approximately two hours of digits have been recorded , half of which have been extracted . researchers doing asr are looking into methods for generating a better channel-based segmentation to improve recognition results for close-talking microphone data . transcription checking procedures were reviewed , and efforts to coordinate the channelization and presegmentation of data with the tightening of time bins were discussed . the group also talked about downsampling and strategies for coping with low disk space .

digits forms will instruct speakers to read digits separately and not as connected numbers . a tentative decision was made to collect overlapping digits from speakers . transcribers will be given channelized data that has been segmented for speech/non-speech boundaries to determine whether such pre-processing facilitates the transcription process .

participants have been slow in returning the relevant forms necessary for matching digits data with speaker ids . forms containing digits that were written out in english were unsuccessful for obtaining desired prosodic groupings . acoustic adaptation is problematic for speakers who seldom talk during meetings . speaker overlap causes recognition errors . the lapel microphone is problematic as it picks up overlapping background speech . more disk space is needed . loading long waveforms in x waves is very time consuming . the dictionary must be updated with forms introduced as part of speaker fe008's set of transcription conventions . the ongoing tightening of boundaries on time bins causes segment boundaries to change , indicating potential problems for other ongoing processing tasks .

a test set of digits data totalling two hours is nearly complete . digit extraction has been performed on roughly half of this data . future work may involve experimenting with the reading of digits in different prosodic groupings . preliminary asr results were discussed . subsequent efforts will involve generating a better channel-based segmentation , and possibly doing echo cancellation , to improve recognition on the close-talking microphone data . filename conventions are being standardized . transcription checking procedures have been formalized , including a spell check , producing an exhaustive list of forms identified in the data , attributing every utterance to the appropriate speaker id , glossing spoken forms with their full orthographic counterparts , e.g. 'cuz' and 'because' , transcribing acronyms , and encoding comments , i.e. glosses , vocalic and non-vocalic non-speech events , pragmatic cues , and the standardization of spoken forms , e.g. 'mm-hmm' . scripts will be used to convert a channelized master copy into forms that are appropriate for doing both recognition and linguistic analysis . coordinated efforts are in progress between the transcriber pool and speaker mn014 to fine-tune presegmented output to handle data from a variety of speakers and regions of speaker overlap .
if the file sits in memory you can do extremely fast seeks phd g: so we 're also looking at these in waves like for the alignments and . you ca n't load an hour of speech into x waves . you need to s have these small files , and , even for the transcriber program phd g: , if you try to load s really long waveform into x waves , you 'll be waiting there for phd b: no , i ' m not suggesting you load a long wave file , i ' m just saying you give it a start and an end time . and it 'll just go and pull out that section . grad d: i th w the transcribers did n't have any problem with that did they jane ? phd g: we have a problem with that , time - wise on a it - it 's a lot slower to load in a long file , phd g: overall you could get everything to work by accessing the same waveform and trying to find two , the begin and end times . but it 's more efficient , if we have the storage space , to have the small ones . grad d: if we do n't have a spare disk sitting around we go out and we buy ourselves an eighty gigabyte drive and make it all scratch space . , it 's not a big deal . postdoc e: you 're right about the backup being a bottleneck . it 's good to think towards scratch . grad d: so remind me afterward and i 'll and we 'll look at your disk and see where to put . grad c: ok . alright . , i could just u do a du on it right ? and just see which how much is on each grad d: each partition . and you wanna use , either xa or scratch . x question mark , anything starting with x is scratch . postdoc e: ok . so i got a little print - out here . so three on this side , three on this side . and i stapled them . alright so , first of all , there was a an interest in the transcribe transcription , checking procedures and tell you first , to go through the steps although you ' ve probably seen them . , as you might imagine , when you 're dealing with , r really c a fair number of words , and , @ natural speech which means s self - repairs and all these other factors , that there 're lots of things to be , s standardized and streamlined and checked on . and , so , i did a bunch of checks , and the first thing i did was a spell - check . and at that point i discovered certain things like , " accommodate " with one " m " , that thing . and then , in addition to that , i did an exhaustive listing of the forms in the data file , which included n detecting things like f faulty punctuation and things phd b: so you 're doing these so the whole process is that the transcribers get the conversation and they do their pass over it . postdoc e: exactly . i do these checks . and so , i do a an exhaustive listing of the forms actually , i will go through this in order , so if we could maybe and stick keep that for a second cuz we 're not ready for that . postdoc e: , . exactly ! alright so , a spelling check first then an exhaustive listing of the , all the forms in the data with the punctuation attached and at that point i pick up things like , , word followed by two commas . and th and then another check involves , being that every utterance has an identifiable speaker . and if not , then that gets checked . then there 's this issue of glossing s w so - called " spoken - forms " . so there mo for the most part , we 're keeping it standard wo word level transcription . but there 's w and that 's done with the assumption that pronunciation variants can be handled . 
so for things like " and " , the fact that someone does n't say the " d " , that 's not important enough to capture in the transcription because a good pronunciation , , model would be able to handle that . however , things like " cuz " where you 're lacking an entire very prominent first syllable , and furthermore , it 's a form that 's specific to spoken language , those are r reasons f for those reasons i kept that separate , and used the convention of using " cuz " for that form , however , glossing it so that it 's possible with the script to plug in the full orthographic form for that one , and a couple of others , not many . so " wanna " is another one , going , " gon na " is another one , with just the assumption , again , that this th these are things which it 's not really fair to a c consider expect that a pronunciation model , to handle . and chuck , you in you indicated that " cuz " is one of those that 's handled in a different way also , did n't you ? did i postdoc e: so so it might not have been it might not have been you , but someone told me that " cuz " is treated differently in , i u in this context because of that r reason that , it 's a little bit farther than a pronunciation variant . ok , so after that , let 's see , phd b: so that was part of the spell - check , or was that was after the spell - check ? postdoc e: so when i get the exhau so the spell - check picks up those words because they 're not in the dictionary . so it gets " cuz " and " wanna " and that postdoc e: , - . run it through i have a sed , so i do sed script saying whenever you see " gon na " , " convert it to gon na " , " gloss equals quote going - to quote " , and with all these things being in curly brackets so they 're always distinctive . ok , i also wrote a script which will , retrieve anything in curly brackets , or anything which i ' ve classified as an acronym , and a pronounced acronym . and the way i tag ac pronounced acronyms is that i have underscores between the components . so if it 's " acl " then it 's " a " underscore " c " underscore " l " . grad d: and so your list here , are these ones that actually occurred in the meetings ? phd g: , can i ask a question about the glossing , before we go on ? so , for a word like " because " is it that it 's always predictably " because " ? , is " cuz " always meaning " because " ? postdoc e: yes , but not the reverse . so sometimes people will say " because " in the meeting , and if they actually said " because " , then it 's written as " because " with no w cuz does n't even figure into the equation . professor f: but but in our meetings people do n't say " hey cuz how you doing ? " phd g: the the only problem is that with for the recognition we map it to " because " , and so if we know that " cuz " grad d: you have the gloss form so you always replace it . if that 's how what you wanna do . postdoc e: and don knows this , and he 's bee he has a glo he has a script that postdoc e: on the different types of comments , which we 'll see in just a second . so the pronounceable acronyms get underscores , the things in curly brackets are viewed as comments . there 're comments of four types . so this is a good time to introduce that . the four types . w and maybe we 'll expand that but the comments are , of four types mainly right now . one of them is , the gloss type we just mentioned . grad d: so a are we done with acronyms ? cuz i had a question on what this meant . postdoc e: i ' m still doing the overview . 
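the sed pass being described would look roughly like this ; the exact curly - bracket gloss syntax is reconstructed from the discussion , so treat it as illustrative , and the \b word boundaries assume gnu sed :

    # gloss the spoken forms with their full orthographic counterparts
    sed -e 's/\bgonna\b/gonna {gloss= "going to"}/g' \
        -e 's/\bwanna\b/wanna {gloss= "want to"}/g' \
        -e 's/\bcuz\b/cuz {gloss= "because"}/g' \
        transcript.txt > glossed.txt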
i have n't actually gotten here yet . postdoc e: ok so , gloss is things like replacing the full form u with the , more abbreviated one to the left . , then you have if it 's , there 're a couple different types of elements that can happen that are n't really properly words , and wo some of them are laughs and breathes , so we have that 's prepended with a v a tag of " voc " . postdoc e: and the non - vocal ones are like door - slams and tappings , and that 's prepended with a no non - vocalization . phd b: so then it just an ending curly brace there , or is there something else in there . postdoc e: and then the no non - vocalization would be something like a door - slam . they always end . so it 's like they 're paired curly brackets . and then the third type right now , is m things that fall in the category of comments about what 's happening . so it could be something like , " referring to so - and - so " , talking about such - and - such , , " looking at so - and - so " . phd b: so on the m on the middle t so , in the first case that gloss applies to the word to the left . but in the middle two th - it 's not applying to anything , right ? grad d: the " qual " can be the " qual " is applying to the left . postdoc e: , and actually , it is true that , with respect to " laugh " , there 's another one which is " while laughing " , postdoc e: and that is , i an argument could be made for this tur turning that into a qualitative statement because it 's talking about the thing that preceded it , but at present we have n't been , , coding the exact scope of laughing , and so to have " while laughing " , that it happened somewhere in there which could mean that it occurred separately and following , or , including some of the utterances to the left . have n't been awfully precise about that , but i have here , now we 're about to get to the to this now , i have frequencies . so you 'll see how often these different things occur . but , , the very front page deals with this , final c pa , aspect of the standardization which has to do with the spoken forms like " - " and " ha " and " - " and all these different types . and , , someone pointed out to me , this might have been chuck , about how a recognizer , if it 's looking for " - " with three m 's , and it 's transcribed with two m 's , that it might increase the error rate which is which would really be a shame because , i p i personally w would not be able to make a claim that those are dr dramatically different items . so , right now i ' ve standardized across all the existing data with these spoken forms . postdoc e: all existing data except thirty minutes which got found today . so , i ' m gon na check postdoc e: acsu - actually . i got it was stored in a place i did n't expect , postdoc e: and this is this 'll be great . so i 'll be able to get through that tonight , and then everyth i , actually later today probably . and so then we 'll have everything following these conventions . but you notice it 's really rather a small set of these kinds of things . and i made it so that these are , with a couple exceptions but , things that you would n't find in the spell - checker so that they 'll show up really easily . and , grad c: jane , can i ask you a question ? what 's that very last one correspond to ? i do n't even know how to pronounce that . postdoc e: , . now that s only occurs once , and i ' m thinking of changing that . postdoc e: i have n't heard it actually . i n i need to listen to that one . 
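the retrieval script mentioned above can be approximated with two greps over the same conventions , curly - bracket comments and underscore - joined pronounced acronyms ; gnu grep is assumed for the \b extension :

    # everything in curly brackets, i.e. the four comment types, with frequencies
    grep -o '{[^}]*}' glossed.txt | sort | uniq -c | sort -rn

    # pronounced acronyms such as a_c_l, tagged with underscores between the letters
    grep -oE '\b([a-z]_)+[a-z]\b' glossed.txt | sort | uniq -c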
postdoc e: did she hear the th did she actually hear it ? cuz i have n't heard it . phd g: no , we just gave her a list of words that , were n't in our dictionary and so it picked up like this , and she just did n't listen so she did n't know . we just we 're waiting on that just to do the alignments . postdoc e: i ' m curious to se hear what it is , but i did n't know wanna change it to something else until i knew . grad d: yes , that 's right . we 're gon na have a big problem when we talk about that . grad d: , or if you 're a c programmer . you say arg - c and arg - v all the time . phd g: i have one question about the " eh " versus like the " ah " and the " uh " . postdoc e: that 's partly a nonnative - native thing , but i have found " eh " in native speakers too . but it 's mostly non - native phd g: , right , cuz there were some speakers that did definite " 's " but right now we phd g: so , it 's actually probably good for us to know the difference between the real " and the one that 's just like " or transcribed " aaa " cuz in like in switchboard , you would see e all of these forms , but they all were like " . phd g: or the " uh " , " eh " , " ah " were all the same . and then , we have this additional non - native version of , like " eeh " . grad c: all the " eh " 's i ' ve seen have been like that . they ' ve been like " like that have bee has been transcribed to " eh " . and sometimes it 's stronger , grad d: i ' m just these poor transcribers , they 're gon na hate this meeting . phd g: but you 're a native german speaker so it 's not a issue for it 's only postdoc e: that makes sense . , and so , , th i have there are some , americans who are using this " too , and i have n't listened to it systematically , maybe with some of them , they 'd end up being " 's " but , i my spot - checking has made me think that we do have " in also , american e data represented here . but any case , that 's the this is reduced down from really quite a long a much longer list , postdoc e: functionally pretty , also it was fascinating , i was listening to some of these , i two nights ago , and it 's just hilarious to liste to do a search for the " - 's " . and you get " - " and diff everybody 's doing it . postdoc e: just i wanted to say i w think it would be fun to make a montage of it because there 's a " - . postdoc e: all these different vocal tracts , but it 's the same item . it 's very interesting . , then the acronyms y and the ones in parentheses are ones which the transcriber was n't of , and i have n't been able to listen to clarify , but you can see that the parenthesis convention makes it very easy to find them postdoc e: question mark is punctuation . so it they said that @ , " dc ? " postdoc e: , so the only , and i do have a stress marker here . sometimes the contrastive stress is showing up , and , professor f: i ' m , i got lost here . what - w what 's the difference between the parenthesized acronym and the non - parenthesized ? postdoc e: the parenthesized is something that the transcriber thought was ann , but was n't entirely . so i 'd need to go back or someone needs to go back , and say , yes or no , and then get rid of the parentheses . but the parentheses are used only in that context in the transcripts , of noti noticing that there 's something uncertain . grad d: i know ! i was saying that a lot of them are the networks meeting . postdoc e: nsa , a lot of these are coming from them . i listened to some of that . 
postdoc e: and robustness has a fair amount , but the nsa group is just very many . postdoc e: right and sometimes , you see a couple of these that are actually " ok 's " so it 's may be that they got to the point where it was low enough understandable understandability that they were n't entirely the person said " ok . " , so it is n't really necessarily a an undecipherable acronym , postdoc e: but just n needs to be double checked . now we get to the comments . this postdoc e: number of times out of the entire database , w except for that last thirty minutes i have n't checked yet . phd a: so what is the difference between " papers rustling " and " rustling papers " ? postdoc e: i 'd have to listen . i w i 'd like to standardize these down farther but , , to me that sounds equivalent . but , i ' m a little hesitant to collapse across categories unless i actually listen to them . phd g: this is exactly how people will prove that these meetings do differ because we 're recording , right ? phd g: y no normally you do n't go around saying , " now you ' ve said it six times . postdoc e: but did you notice that there were seven hundred and eighty five instances of " ok " ? grad c: is this after like did you do some replacements for all the different form of " ok " to this ? postdoc e: of " ok " , yes . so that 's the single existing convention for " ok " . grad c: although , what 's there 's one with a slash after it . that 's disturbing . postdoc e: i actually explicitly looked for that one , and that , i ' m not exactly about that . postdoc e: no , i looked for that , but that does n't actually exist . and it may be , i do n't i ca n't explain that . postdoc e: it 's the only pattern that has a slash after it , and it 's an epiphenomenon . grad d: so i 'll just i was just looking at the bottom of page three there , is that " to be " or " not to be " . postdoc e: there is th one y , no , that 's r that 's legitimate . so now , comments , you can see they 're listed again , same deal , with exhaustive listing of everything found in everything except for these final th thirty minutes . grad d: ok so , on some of these quals , are they really quals , or are they glosses ? so like there 's a " qual tcl " . postdoc e: tcl . where do you see that ? the reason is because w it was said " tickle " . grad d: i see , i see . so it 's not gloss . ok , i see . postdoc e: on the in the actual script in the actual transcript , i s i so this happens in the very first one . i actually wrote it as " tickle " . because we they did n't say " tcl " , they said " tickle " . and then , following that is " qual tcl " . phd g: lan ok - we ok it 's in the language model , w , but it so it 's the pronunciation model that has to have a pronunciation of " tickle " . phd g: what i meant is that there should be a pronunciation " tickle " for tcl as a word . and that word in the in , it stays in the language model wherever it was . you never would put " tickle " in the language model in that form , there 's actually a bunch of cases like this with people 's names and phd b: so how w there 'd be a problem for doing the language modeling then with our transcripts the way they are . phd g: yes . so th there 's a few cases like that where the , the word needs to be spelled out in a consistent way as it would appear in the language , but there 's not very many of these . tcl 's one of them . grad d: right , so y so , whoever 's creating the new models , will have to also go through the transcripts and change them synchronously . 
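one way to keep the language - model text synchronized , as just requested , is a single substitution run over the glossed transcripts ; this assumes the gloss format sketched earlier and is only one possible convention :

    # fold the surface form "tickle" back to the canonical word "tcl" for the language
    # model; the pronunciation "tickle" then lives in the dictionary entry for tcl
    sed 's/tickle {qual tcl}/tcl/g' glossed.txt > lm_text.txt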
phd g: we have this there is this thing i was gon na talk to you about at some point about , what do we do with the dictionary as we 're up updating the dictionary , these changes have to be consistent with what 's in the like spelling people 's names and . if we make a spelling correction to their name , like someone had deborah tannen 's name mispelled , and since we know who that is , we could correct it , phd g: but we need to make we have the mispel if it does n't get corrected we have to have a pronunciation as a mispelled word in the dictionary . things like that . postdoc e: , now the tannen corre the spelling c change . , that 's what gets i picked those up in the frequency check . phd g: so if there 's things that get corrected before we get them , it 's not an issue , but if there 's things that , we change later , then we always have to keep our the dictionary up to date . and then , in the case of " tickle " i we would just have a , word " tcl " which phd g: which normally would be an acronym , " tcl " but just has another pronunciation . postdoc e: icsi is one of those that sometimes people pronounce and sometimes they say " icsi . " so , those that are l are listed in the acronyms , i actually know they were said as letters . the others , e those really do need to be listened to cuz i have n't been able to go to all the ic icsi things , professor f: don and i were just noticing , love this one over on page three , vocal gesture mimicking sound of screwing something into head to hold mike in place . grad d: a lot of these are me the " beep is said with a high pit high pitch and lengthening . " phd g: in the old because he was saying , " how many e 's do i have to allow for ? " postdoc e: that 's been changed . so , exactly , that 's where the lengthening comment c came in . postdoc e: because you see " beep " and it gets kicked out in the spelling , and it also gets kicked out in the , freq frequency listing . i have the there 're various things like " breathe " versus " breath " versus " inhale " and , hhh , i . they do n't have any implications for anything else so it 's like i ' m tempted to leave them for now an and it 's easy enough to find them when they 're in curly brackets . we can always get an exhaustive listing of these things and find them and change them . postdoc e: , but i do n't actually remember what it was . but that was eric did that . grad d: on the glosses for numbers , it seems like there are lots of different ways it 's being done . postdoc e: chuck led to a refinement here which is to add " nums " if these are parts of the read numbers . now you already that i had , in places where they had n't transcribed numbers , i put " numbers " in place of any numbers , but there are places where they , it th this convention came later an and at the very first digits task in some transcripts they actually transcribed numbers . and , d chuck pointed out that this is read speech , and it 's to have the option of ignoring it for certain other prob p , things . and that 's why there 's this other tag here which occurs a hundred and five or three hundred and five times right now which is just n " nums " by itself postdoc e: which means this is part of the numbers task . i may change it to " digits " . , i with the sed command you can really just change it however you want because it 's systematically encoded , ? have to think about what 's the best for the overall purposes , but in any case , " numbers " and " nums " are a part of this digits task thing . 
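a cheap guard against the transcript / dictionary drift being discussed is an out - of - vocabulary check ; dict_words.txt is an assumed sorted , one - word - per - line dump of the recognizer dictionary :

    # list forms in the language-model text that have no dictionary entry yet
    # (comm needs both inputs sorted)
    tr -s '[:space:]' '\n' < lm_text.txt | sort -u > transcript_words.txt
    comm -23 transcript_words.txt dict_words.txt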
, now th then i have these numbers that have quotation marks around them . , i did n't want to put them in as gloss comments because then you get the substitution . and actually , th , the reason i b did it this way was because i initially started out with the other version , you have the numbers and you have the full form and the parentheses , however sometimes people stumble over these numbers they 're saying . so you say , " seve - seventy eight point two " , or whatever . and there 's no way of capturing that if you 're putting the numbers off to the side . you ca n't have the seven and postdoc e: the left is i so example the very first one , it would be , spelled out in words , " point five " . postdoc e: so i this is also spelled out in words . point five . and then , in here , " nums " , so it 's not going to be mistaken as a gloss . it comes out as " nums quote dot five " . grad d: ok now , the other example is , in the glosses right there , gloss one dash one three zero . what what 's to the left of that ? postdoc e: now in that case it 's people saying things like " one dash so - and - so " or they 're saying " two zero " whatever . and in that case , it 's part of the numbers task , and it 's not gon na be included in the read digits anyway , postdoc e: so i m in the there is . i ' ve added that all now too . grad c: there 's a " numbers " tag i ' m i did n't follow that last thing . postdoc e: so , so gloss in the same line that would have " gloss quote one dash one thirty " , you 'd have a gloss at the end of the line saying , " curly bracket nums curly bracket " . so if you did a , a " grep minus v nums " phd g: so that 's the so there would n't be something like i if somebody said something like , " boy , i ' m really tired , ok . " and then started reading that would be on a separate line ? ok great . cuz i was doing the " grep minus v " quick and dirty and looked like that was working ok , phd g: now why do we what 's the reason for having like the point five have the " nums " on it ? is that just like when they 're talking about their data ? postdoc e: these are all these , the " nums point " , this all where they 're saying " point " something or other . postdoc e: and the other thing too is for readability of the transcript . if you 're trying to follow this while you 're reading it 's really hard to read , " so in the data column five has " , " one point five compared to seventy nine point six " , it 's like when you see the words it 's really hard to follow the argument . and this is just really a way of someone who would handle th the data in a more discourse - y way to be able to follow what 's being said . postdoc e: where we 're gon na have a master file of the channelized data . , there will be scripts that are written to convert it into these t these main two uses and th some scripts will take it down th e into a f a for ta take it to a format that 's usable for the recognizer an , other scripts will take it to a form that 's usable for the for linguistics an and discourse analysis . and , the implication that i have is that th the master copy will stay unchanged . these will just be things that are generated , postdoc e: when things change then the script will cham change but the but there wo n't be stored copies of in different versions of things . 
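the " grep minus v nums " filtering mentioned above , together with the master - copy idea , might be scripted as follows , with master.txt and the output names as placeholders :

    # recognizer view: drop every line tagged as part of the read-digits task
    grep -v '{nums}' master.txt > recognizer_view.txt

    # discourse view: keep everything, since the glossed numbers read more easily
    cp master.txt discourse_view.txt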
phd g: so , i 'd have one request here which is just , maybe to make it more robust , th that the tag , whatever you would choose for this type of " nums " where it 's inside the spontaneous speech , is different than the tag that you use for the read speech . phd b: right . that would argue for changing the other ones to be " digits " . phd g: , that way w if we make a mistake parsing , we do n't see the " point five " , or it 's not there , then we a just an and actually for things like " seven eighths " , or people do fractions too i , you maybe you want one overall tag for that would be similar to that , phd g: or as long as they 're sep as they 're different strings that we that 'll make our p processing more robust . cuz we really will get rid of everything that has the " nums " string in it . phd b: i suppose what you could do is just make that you get rid of everything that has " curly brace nums curly brace " . postdoc e: that was that was my motivation . and i these can be changed , like i said . , as i said i was considering changing it to " digits " . and , it just i , it 's just a matter of deciding on whatever it is , and being the scripts know . phd g: it would probably be safer , if you 're willing , to have a separate tag just because , then we know for . and we can also do counts on them without having to do the processing . but you 're right , we could do it this way , it should work . phd g: but it 's probably not hard for a person to tell the difference because one 's in the context of a , a transcribed word string , postdoc e: the other thing is you can get really so minute with these things and increase the size of the files and the re and decrease the readability to such an extent by simply something like " percent " . now i could have adopted a similar convention for " percent " , but somehow percent is not so hard , ? i it 's just when you have these points and you 're trying to figure out where the decimal places are and we could always add it later . percent 's easy to detect . point however is a word that has a couple different meanings . and you 'll find both of those in one of these meetings , where he 's saying " the first point i wanna make is so - and - so " and he goes through four points , and also has all these decimals . phd b: , what does the sri recognizer output for things like that ? seven point five . does it output the word phd g: and and actually , the language it 's the same point , actually , the p , the word " to " and the word y th " going to " and " to go to " those are two different " to 's " and so there 's no distinction there . it 's just the word " point " has , every word has only one , e one version even if it 's a actually even like the word " read " and " read " those are two different words . they 're spelled the same way , right ? and they 're still gon na be transcribed as read . , i like the idea of having this in there , i was a little bit worried that , the tag for removing the read speech because i what if we have like " read letters " or , i , phd g: like " read something " like " read " but other than that i it sounds great . 
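with distinct tags for the two cases , say {digits} for read strings and {nums} for in - speech numbers ( both names are assumptions pending the final convention ) , the filtering becomes robust fixed - string matching :

    # remove only whole read-digit lines; -F avoids any regex surprises
    grep -v -F '{digits}' master.txt > spontaneous_only.txt

    # count lines containing an in-speech number gloss, without any parsing
    grep -c -F '{nums}' master.txt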
postdoc e: that , thilo requested , that we ge get some segments done by hand to e s reduce the size of the time bins wh like was chuc - chuck was mentioning earlier that , if you said , " and it was in part of a really long , s complex , overlapping segment , that the same start and end times would be held for that one as for the longer utterances , grad d: we did that for one meeting , right , so you have that data do n't you ? postdoc e: and he requested that there be , similar , samples done for five minute stretches c involving a variety of speakers and overlapping secti sections . he gave me he did the very , he did some shopping through the data and found segments that would be useful . and at this point , all four of the ones that he specified have been done . in addition the i ' ve i have the transcribers expanding the amount that they 're doing actually . postdoc e: so right now , i know that as of today we got an extra fifteen minutes of that type , and i ' m having them expand the realm on either side of these places where they ' ve already started . but if , and i and he 's gon na give me some more sections that he thinks would be useful for this purpose . because it 's true , if we could do the more fine grained tuning of this , using an algorithm , that would be so much more efficient . and , . so this is gon na be useful to expand this . phd a: so we sh perhaps we should try to start with those channelized versions just to try it . give it give one tr transcriber the channelized version of my speech - nonspeech detection and look if that 's helpful for them , or just let them try if that 's better or if they if they can postdoc e: that 'd be excellent . , that 'd be really great . as it stands we 're still in the phase of , cleaning up the existing data getting things , in i m more tight tightly time , aligned . i also wanna tell , i also wanted to r raise the issue that ok so , there 's this idea we 're gon na have this master copy of the transcript , it 's gon na be modified by scripts t into these two different functions . and actually the master postdoc e: two two or more . and that the master is gon na be the channelized version . so right now we ' ve taken this i initial one , it was a single channel the way it was input . and now , to the advances made in the interface , we can from now on use the channelized part , and , any changes that are made get made in the channelized version thing . but i wanted to get all the finished all the checks grad c: so , have those e the vis the ten hours that have been transcribed already , have those been channelized ? and i know i ' ve seen @ i ' ve seen they ' ve been channelized , grad c: have they been has the time have the time markings been adjusted , p on a per channel postdoc e: , for a total of like twenty m f for a total of let 's see , four times total of about an thirty minutes . that 's that 's been the case . grad c: i , i if we should talk about this now , or not , but i grad c: , i know . no , but my question is like should i until all of those are processed , and channelized , like the time markings are adjusted before i do all the processing , and we start like branching off into the our layer of transcripts . postdoc e: , the problem is that some of the adjustments that they 're making are to bring are to combine bins that were time bins which were previously separate . and the reason they do that is sometimes there 's a word that 's cut off . 
and so , i it 's true that it 's likely to be adjusted in the way that the words are more complete . grad c: no i know that adjusting those things are gon na is gon na make it better . postdoc e: so i it 's gon na be a more reliable thing and i ' m not grad c: i ' m about that , but do you have like a time frame when you can expect like all of it to be done , or when you expect them to finish it , or postdoc e: partly it depends on how , how e effective it will be to apply an algorithm because i this takes time , it takes a couple hours t to do , ten minutes . phd b: so right now the what you 're doing is you 're taking the , the o original version and you 're channelizing yourself , right ? grad c: . i ' m doing it myself . i if the time markings are n't different across channels , like the channelized version really does n't have any more information . so , i was just , originally i had done before like the channelized versions were coming out . phd b: so i th probably the way it 'll go is that , when we make this first general version and then start working on the script , that script @ that will be ma primarily come from what you ' ve done , we 'll need to work on a channelized version of those originals . phd b: and so it should be identical to what you have t except for the one that they ' ve already tightened the boundaries on . , and then probably what will happen is as the transcribers finish tightening more and more , that original version will get updated and then we 'll rerun the script and produce better versions . but the i the ef the effect for you guys , because you 're pulling out the little wave forms into separate ones , that would mean these boundaries are constantly changing you 'd have to constantly re rerun that , so , maybe phd g: the harder part is making that the transc the transcription so if you b merge two things , then that it 's the sum of the transcripts , but if you split inside something , you do n't where the word which words moved . and that 's wh that 's where it becomes a little bit , having to rerun the processing . the cutting of the waveforms is pretty trivial . grad c: as long as it can all be done automatically , then that 's not a concern . , if have to run three scripts to extract it all and let it run on my computer for an hour and a half , or however long it takes to parse and create all the reference file , that 's not a problem . so . as long as we 're at that point . and i know exactly like what the steps will work what 's going on , in the editing process , postdoc e: so that 's i could there were other checks that i did , but it 's that we ' ve unless you think there 's anything else , that i ' ve covered it . ###summary: the berkeley meeting recorder group discussed digits data , recent asr results , the status of transcriptions , and disk space and storage format issues. approximately two hours of digits have been recorded , half of which have been extracted. researchers doing asr are looking into methods for generating a better channel-based segmentation to improve recognition results for close-talking microphone data. transcription checking procedures were reviewed , and efforts to coordinate the channelization and presegmention of data with the tightening of time bins were discussed. the group also talked about downsampling and strategies for coping with low disk space. digits forms will instruct speakers to read digits separately and not as connected numbers. a tentative decision was made to collect overlapping digits from speakers. 
transcribers will be given channelized data that has been segmented for speech/non-speech boundaries to determine whether such pre-processing facilitates the transcription process. participants have been slow in returning the relevant forms necessary for matching digits data with speaker ids . forms containing digits that were written out in english were unsuccessful for obtaining desired prosodic groupings. acoustic adaptation is problematic for speakers who seldom talk during meetings. speaker overlap causes recognition errors. the lapel microphone is problematic as it picks up overlapping background speech. more disk space is needed. loading long waveforms in x waves is very time consuming. the dictionary must be updated with forms introduced as part of speaker fe008's set of transcription conventions. the ongoing tightening of boundaries on time bins causes segment boundaries to change , indicating potential problems for other ongoing processing tasks. a test set of digits data totalling two hours is nearly complete. digit extraction has been performed on roughly half of this data. future work may involve experimenting with the reading of digits in different prosodic groupings. preliminary asr results were discussed. subsequent efforts will involve generating a better channel-based segmentation , and possibly doing echo cancellation , to improve recognition on the close-talking microphone data. filename conventions are being standardized. transcription checking procedures have been formalized , including a spell check , producing an exhaustive list of forms identified in the data , attributing every utterance to the appropriate speaker id , glossing spoken forms with their full orthographic counterparts , e.g . 'cuz' and 'because' , transcribing acronyms , and encoding comments , i.e . glosses , vocalic and non-vocalic non-speech events , pragmatic cues , and the standardization of spoken forms , e.g . 'mm-hmm'. scripts will be used to convert a channelized master copy into forms that are appropriate for doing both recognition and linguistic analysis. coordinated efforts are in progress between the transcriber pool and speaker mn014 to fine-tune presegmented output to handle data from a variety of speakers and regions of speaker overlap.
professor d: ok . so , you can fill those out , after , actually , so so , i got , these results from , stephane . also , that , we might hear later today , about other results . s that , there were some other very good results that we 're gon na wanna compare to . but , r our results from other places , professor d: , i got this from you and then i sent a note to sunil about the cuz he has been running some other systems professor d: other than the icsi ogi one . so , i wan wanna see what that is . but , , so we 'll see what it is comparatively later . but it looks like , professor d: most of the time , even though it 's true that the overall number for danish we did n't improve it if you look at it individually , what it really says is that there 's , looks like out of the six cases , between the different kinds of , matching conditions out of the six cases , there 's , a couple where it stays about the same , three where it gets better , and one where it gets worse . , go ahead . phd a: y actually , , for the danish , there 's still some mystery because , when we use the straight features , we are not able to get these number with the icsi ogi one , . we do n't have this ninety - three seventy - eight , we have eight phd a: , so , that 's probably something wrong with the features that we get from ogi . , and sunil is working on trying to check everything . professor d: we have a little bit of time on that , actually . we have a day or so , when when do you folks leave ? professor d: sunday ? so until saturday midnight , we have w we have time , that would be good . that 'd be good . , and , i u when whenever anybody figures it out they should also , for , email hynek because hynek will be over there telling people what we did , so he should know . professor d: good , so , we 'll hold off on that a little bit . , even with these results as they are , it 's really not that bad . but but , and it looks like the overall result as they are now , even without , any bugs being fixed is that , on the other tasks , we had this average of , forty nine percent , or so , improvement . and here we have somewhat better than that than the danish , and somewhat worse than that on the german , but , it sounds like , one way or another , the methods that we 're doing can reduce the error rate from mel ceptrum down by , a fourth of them to , a half of them . somewhere in there , depending on the exact case . so that 's good . , that , one of the things that hynek was talking about was understanding what was in the other really good proposals and trying to see if what should ultimately be proposed is some , combination of things . , if , cuz there 's things that they are doing there that we certainly are not doing . and there 's things that we 're doing that they 're not doing . and and they all seem like good things . professor d: , first place , there 's still this thing to work out , and second place second thing is that the only results that we have so far from before were really development set results . professor d: so , in this community that 's of interest . it 's not like everything is being pinned on the evaluation set . but , for the development set , our best result was a little bit short of fifty percent . and the best result of any system was about fifty - four , where these numbers are the , relative , reduction in , word error rate . professor d: and , the other systems were , somewhat lower than that . there was actually there was much less of a huge range than there was in aurora one . 
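for reference , the relative reduction in word error rate quoted throughout is the usual ratio ; in latex form , with the concrete baseline numbers left unstated because they are not given in the meeting :

    \text{rel. reduction} = \frac{\text{WER}_{\text{baseline}} - \text{WER}_{\text{system}}}{\text{WER}_{\text{baseline}}}

so a system that cut a hypothetical 10% baseline word error rate to 5% would score the roughly fifty percent mentioned here .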
in aurora one there were systems that ba did n't improve things . and here the worst system still reduced the error rate by thirty - three percent , in development set . professor d: so so , everybody is doing things between , roughly a third of the errors , and half the errors being eliminated , and varying on different test sets and . professor d: it 's probably a good time to look at what 's really going on and seeing if there 's a way to combine the best ideas while at the same time not blowing up the amount of , resources used , cuz that 's critical for this test . phd c: do we know anything about who 's was it that had the lowest on the dev set ? professor d: , the there were two systems that were put forth by a combination of , french telecom and alcatel . and , they differed in some respects , but they e one was called the french telecom alcatel system the other was called the alcatel french telecom system , which is the biggest difference , . but but there 're some other differences , too . , and , they both did very , ? so , my impression is they also did very on the , evaluation set , but , i we have n't seen you ' ve - you have n't seen any final results for that professor d: there is a couple pieces to it . there 's a spectral subtraction style piece it was , wiener filtering . and then there was some p some modification of the cepstral parameters , where they phd a: , actually , something that 's close to cepstral mean subtraction . but , the way the mean is adapted , it 's signal dependent . i ' m , so , the mean is adapted during speech and not during silence . but it 's very close to cepstral mean subtraction . professor d: but some people have done exactly that thing , of and the it 's not to to look in speech only , to try to m to measure these things during speech , that 's p that 's not that uncommon . but i it so it looks like they did some , reasonable things , and they 're not things that we did , precisely . we did unreasonable things , which because we like to try strange things , and our things worked too . and so , , it 's possible that some combination of these different things that were done would be the best thing to do . but the only caveat to that is that everybody 's being real conscious of how much memory and how much cpu they 're using because these , standards are supposed to go on cell phones with m moderate resources in both respects . professor d: everybody was focused elsewhere . , now , one of the things that 's about what we did is , we do have a , a filtering , which leads to a , a reduction in the bandwidth in the modulation spectrum , which allows us to downsample . so , as a result of that we have a reduced , transmission rate for the bits . that was misreported the first time out . it it said the same amount because for convenience sake in the particular way that this is being tested , they were repeating the packets . so it was they were s they had twenty - four hundred bits per second , but they were literally creating forty - eight hundred bits per second , even though y it was just repeated . professor d: , n , this was just a ph phoney thing just to fit into the software that was testing the errors channel errors and so on . so in reality , if you put this system in into , the field , it would be twenty - four hundred bits per second , not forty - eight hundred . , so that 's a feature of what we did . but , , we still have to see how it all comes out . , and then there 's the whole standards process , which is another thing altogether . 
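the packet - repetition arithmetic behind the misreported figure is simply :

    2400\,\text{bits/s payload} \times 2\ \text{(each packet repeated)} = 4800\,\text{bits/s on the test channel}

while a fielded system would send the unrepeated 2400 bits per second .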
phd c: when is the development set , the , test set results due ? like the day before you leave ? professor d: , probably the day after they leave , but we 'll have to stop it the day before we leave . professor d: and , they , right . and the , results are due like the day before the meeting . professor d: i th i think they are , so , since we have a bit farther to travel than some of the others , we 'll have to get done a little quicker . but , , it 's just tracing down these bugs . , just exactly this thing of , why these features seem to be behaving differently , in california than in oregon . might have something to do with electricity shortage . , we did n't have enough electrons here , but , , the main reason for having , it only takes w to run the two test sets in just in computer time is just a day or so , right ? professor d: so , the who the whole reason for having as long as we have , which was like a week and a half , is because of bugs like that . so , we 're gon na end up with these same sheets that have the percentages and so on just for the professor d: just with the missing columns filled in . , that 'll be good . so , i 'll dis i 'll disregard these numbers . that 's that 's good . phd a: so , hynek will try to push for trying to combine , different things ? or professor d: , that 's , the question is " is there is there some advantage ? " , you could just take the best system and say that 's the standard . but that if different systems are getting at good things , a again within the constraint of the resources , if there 's something simple that you can do now , it 's , very reasonable to have a standard for the terminal 's side and then for the server 's side say , " here 's a number of things that could be done . " so , everything that we did could probably just be added on to what alcatel did , and i it 'd probably work pretty with them , too . , that 's one aspect of it . and then on the terminal 's side , i how much , memory and cpu it takes , but it seems like the filtering , the vad they both had , and , so and they both had some on - line normalization , of sorts , so , it seems like the main different there is the , filtering . and the filtering if you can should n't take a lot of memory to do that , and i also would n't think the cpu , would be much either for that part . so , if you can add those in then , you can cut the data rate in half . so it seems like the right thing to do is to on the terminal 's side , take what they did , if it does seem to generalize to german and danish , take what they did add in a filter , and add in some on the server 's side and that 's probably a reasonable standard . phd a: they are working on this already ? because , su - sunil told me that he was trying already to put some , filtering in the france telecom . professor d: , so that 's what that would be ideal would be is that they could , they could actually show that , a combination of some sort , would work even better than what any of the systems had . and , then it would , be something to discuss in the meeting . but , not clear what will go on . , on the one hand , sometimes people are just anxious to get a standard out there . , you can always have another standard after that , but this process has gone on for a while on already and people might just wanna pick something and say , " ok , this is it . " and then , that 's a standard . , standards are always optional . 
it 's just that , if you disobey them , then you risk not being able to sell your product , or and people often work on new standards while an old standard is in place and so on . so it 's not final even if they declared a standard . the other hand , they might just say they just enough yet to declare a standard . you will be you will become experts on this and know more far more than me about the tha this particular standards process once you go to this meeting . be interested in hearing . so , i 'd be , interested in hearing , your thoughts now you 're almost done . , you 're done in the sense that , you may be able to get some new features from sunil , and we 'll re - run it . , but other than that , you 're done , so , i ' m interested in hearing your thoughts about where you think we should go from this . , we tried a lot of things in a hurry , and , if we can back off from this now and take our time with something , and not have doing things quickly be quite so much the constraint , what you think would be the best thing to do . phd a: , first , to really have a look at the speech from these databases because , we tried several thing , but we did not really look at what 's happening , and where is the noise , and professor d: it 's a novel idea . look at the data . or more generally , i , what is causing the degradation . phd a: actually , there is one thing that , generally we think that most of the errors are within phoneme classes , and so it could be interesting to see if it i do n't think it 's still true when we add noise , and so we have i the confusion ma the confusion matrices are very different when we have noise , and when it 's clean speech . and probably , there is much more between classes errors for noisy speech . phd a: and so , so perhaps we could have a large gain , just by looking at improving the , recognition , not of phonemes , but of phoneme classes , simply . and which is a s a simpler problem , perhaps , but which is perhaps important for noisy speech . professor d: the other thing that strikes me , just looking at these numbers is , just taking the best cases , some of these , even with all of our wonderful processing , still are horrible kinds of numbers . but just take the best case , the - matched , german case after er - matched danish after we the numbers we 're getting are about eight or nine p percent error per digit . this is not usable , professor d: , if you have ten digits for a phone number , every now and then you 'll get it right . , it 's , so , the other thing is that , and and a and also , part of what 's about this is that this is , a realistic almost realistic database . , it 's still not people who are really trying to accomplish something , but , within the artificial setup , it is n't noise artificially added , simulated , additive noise . it 's real noise condition . and , the training , i , is always done on the close talking phd a: , it 's they have all these data from the close mike and from the distant mike , from different driving condition , open window , closed window , and they take all of this and they take seventy percent , for training and thirty percent for testing . phd a: so , training is done on different conditions and different microphones , and testing also is done on different microphone and conditions . so , probably if we only take the close microphones , i the results should be much better than this . professor d: i see . , ok , that explains it partially . 
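the " not usable " judgment can be made concrete with a back - of - the - envelope calculation ; assuming , optimistically , that errors on the ten digits of a phone number are independent , a per - digit error rate p gives a string accuracy of (1-p)^{10} , so for the best case quoted here :

    (1 - 0.085)^{10} \approx 0.41

that is , only about four in ten complete phone numbers would come out entirely correct .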
wha - what about i in so the phd a: but the driving conditions , the speed and the road , is different for training and testing , is that right ? and the last condition is close microphone for training and distant for testing . so s so professor d: so , so the high so the right so the highly mismatched case is in some sense a good model for what we ' ve been , typically talking about when we talk about additive noise in and so and i k it does correspond to a realistic situation in the sense that , people might really be trying to , call out telephone numbers or some like that , in their cars and they 're trying to connect to something . phd a: actually , it 's very close to clean speech training because , because the close microphone and noisy speech testing , professor d: and the - matched condition is what you might imagine that you might be able to approach , if that this is the application . you 're gon na record a bunch on people in cars and , and do these training . and then , when y you sell it to somebody , they will be a different person with a different car , and so on . so it 's this is a an optim somewhat optimistic view on it , so , the real thing is somewhere in between the two . , but professor d: right . right , it does n't work . so , in a way , that 's the dominant thing is that even , say on the development set that we saw , the numbers that , that alcatel was getting when choosing out the best single numbers , it was just , it was n't good enough for a real system . you you , so , we still have to do . , and , i so , looking at the data , where , what 's the what 's th what 's characteristic i e , that 's a good thing . does a any you have any thoughts about what else y you 're thinking that you did n't get to that you would like to do if you had more time ? phd e: , f a lot of thing . because we trying a lot of s thing , and we does n't work , we remove these . maybe we trying again with the articulatory feature . i exactly because we tried we some one experiment that does n't work . , forgot it , something i exactly because , tsk maybe do better some step the general , diagram . i exactly s to think what we can improve . professor d: , cuz a lot of time it 's true , there were a lot of times when we ' ve tried something and it did n't work right away , even though we had an intuition that there should be something there . and so then we would just stop it . and , one of the things i do n't remember the details on , but i remember at some point , when you were working with a second stream , and you tried a low - pass filtering to cepstrum , in some case you got professor d: , but it was an msg - like thing , but it was n't msg , you y in some case you got some little improvement , but it was , a small improvement , and it was a big added complication , so you dropped it . but , that was just one try , you just took one filter , threw it there , and it seems to me that , if that is an important idea , which , might be , that one could work at it for a while , as you 're saying . and , and you had , you had the multi - band things also , and , there was issue of that . , barry 's going to be , continuing working on multi - band things as . we were just talking about , some work that we 're interested in . inspired by the by larry saul with the , learning articulatory feature in , in the case of his paper with sonorance based on , multi - band information where you have a combination of gradient learning an and , em . so , that , this is a neat data set . 
, and then , as we mentioned before , we also have the new , digit set coming up from recordings in this room . so , there 's a lot of things to work with . and , what i like about it , in a way , is that , the results are still so terrible . , they 're much better than they were , . we 're talking about thirty to sixty percent , error rate reduction . that 's that 's really great to do that in relatively short time . but even after that it 's still , so poor that , no one could really use it . that 's great that because and y also because again , it 's not something sometimes we ' ve gotten terrible results by taking some data , and artificially , convolving it with some room response , we take a very , at one point , brian and i went downstairs into the basement where it was in a hallway where it was very reverberant and we made some recordings there . and then we , made a simulation of the room acoustics there and applied it to other things , but it was all pretty artificial , and , how often would you really try to have your most crucial conversations in this very reverberant hallway ? this is what 's about the aurora data and the data here , is that it 's a realistic room situation , acoustics acoustic situation , both terms in noise and reflections , and so on and n and , , with something that 's still relatively realistic , it 's still very hard to do very . phd a: so d actually , this is tha that 's why we , it 's a different data . we 're not we 're not used to work with this data . that 's why we should have a loo more closer look at what 's going on . so this would be the first thing , and then , try to , debug what was wrong , when we do aurora test on the msg particularly , and on the multi - band . professor d: no , there 's lots of good things to do with this . so let 's i you were gon na say something else ? , ok . what do you think ? phd c: about other experiments ? , now , i ' m interested in , looking at the experiments where you use , data from multiple languages to train the neural net . and i how far , or if you guys even had a chance to try that , but that would be some it 'd be interesting to me . phd a: again , it 's the thing that , we were thin thinking that it would work , but it did n't work . and , so there is not a bug , but something wrong in what we are doing , perhaps . , something wrong , perhaps in the just in the fact that the labels are what worked best is the hand - labeled data . , so , . i if we can get some hand - labeled data from other languages . it 's not so easy to find . but that would be something interesting t to see . professor d: also , , there was just the whole notion of having multiple nets that were trained on different data . so one form of different data was is from different languages , but the other , i , m in those experiments it was n't so much combining multiple nets , it was a single net that had different so , first thing is would it be better if they were multiple nets , for some reason ? second thing is , never mind the different languages , just having acoustic conditions rather than training them all up in one , would it be helpful to have different ones ? that was a question that was raised by mike shire 's thesis , and on in that case in terms of reverberation . right ? that that sometimes it might be better to do that . but , i do n't think we know for . so , next week , we , wo n't meet because you 'll be in europe . whe - when are you two getting back ? professor d: , that 's right . 
you ' ve got ta s have a saturday overnight , professor d: , . so , we 'll skip next week , and we 'll meet two weeks from now . and , i the main topic will be , you telling us what happened . , so , if we do n't have an anything else to discuss , we should , turn off the machine and then say the real nasty things . professor d: , digits ! good point . good thinking . why do n't you go ahead .
the berkeley meeting recorder group discussed the most recent progress with their current project , a digit recognition system for use in cell phones. this included some discussion of results , comparing various other groups' systems , issues involving the setup , and plans for future work. results are required for an upcoming meeting , but since some group members will be away , results need to be in sooner. although error rates have been greatly reduced , current rates are still unusable in a practical situation. there is a problem replicating some results found by partner ogi , but it is unclear why. mn007 and fn002 have been working with the new danish and german databases , making improvements , though more so with results on the danish. the results are reasonable , but still not good enough. however , it has not been possible to compare against the best competing system , as only development-set results are available so far. there are a number of things that the group wishes to consider when looking for further improvements. various techniques that other groups have tried should all be considered for possible combination. one suitable candidate for combination is the group's own filtering , which narrows the modulation-spectrum bandwidth and halves the bit transmission rate.
###dialogue: professor d: ok . so , you can fill those out , after , actually , so so , i got , these results from , stephane . also , that , we might hear later today , about other results . s that , there were some other very good results that we 're gon na wanna compare to . but , r our results from other places , professor d: , i got this from you and then i sent a note to sunil about the cuz he has been running some other systems professor d: other than the icsi ogi one . so , i wan wanna see what that is . but , , so we 'll see what it is comparatively later . but it looks like , professor d: most of the time , even though it 's true that the overall number for danish we did n't improve it if you look at it individually , what it really says is that there 's , looks like out of the six cases , between the different kinds of , matching conditions out of the six cases , there 's , a couple where it stays about the same , three where it gets better , and one where it gets worse . , go ahead . phd a: y actually , , for the danish , there 's still some mystery because , when we use the straight features , we are not able to get these number with the icsi ogi one , . we do n't have this ninety - three seventy - eight , we have eight phd a: , so , that 's probably something wrong with the features that we get from ogi . , and sunil is working on trying to check everything . professor d: we have a little bit of time on that , actually . we have a day or so , when when do you folks leave ? professor d: sunday ? so until saturday midnight , we have w we have time , that would be good . that 'd be good . , and , i u when whenever anybody figures it out they should also , for , email hynek because hynek will be over there telling people what we did , so he should know . professor d: good , so , we 'll hold off on that a little bit . , even with these results as they are , it 's really not that bad . but but , and it looks like the overall result as they are now , even without , any bugs being fixed is that , on the other tasks , we had this average of , forty nine percent , or so , improvement . and here we have somewhat better than that than the danish , and somewhat worse than that on the german , but , it sounds like , one way or another , the methods that we 're doing can reduce the error rate from mel ceptrum down by , a fourth of them to , a half of them . somewhere in there , depending on the exact case . so that 's good . , that , one of the things that hynek was talking about was understanding what was in the other really good proposals and trying to see if what should ultimately be proposed is some , combination of things . , if , cuz there 's things that they are doing there that we certainly are not doing . and there 's things that we 're doing that they 're not doing . and and they all seem like good things . professor d: , first place , there 's still this thing to work out , and second place second thing is that the only results that we have so far from before were really development set results . professor d: so , in this community that 's of interest . it 's not like everything is being pinned on the evaluation set . but , for the development set , our best result was a little bit short of fifty percent . and the best result of any system was about fifty - four , where these numbers are the , relative , reduction in , word error rate . professor d: and , the other systems were , somewhat lower than that . 
there was actually there was much less of a huge range than there was in aurora one . in aurora one there were systems that ba did n't improve things . and here the worst system still reduced the error rate by thirty - three percent , in development set . professor d: so so , everybody is doing things between , roughly a third of the errors , and half the errors being eliminated , and varying on different test sets and . professor d: it 's probably a good time to look at what 's really going on and seeing if there 's a way to combine the best ideas while at the same time not blowing up the amount of , resources used , cuz that 's critical for this test . phd c: do we know anything about who 's was it that had the lowest on the dev set ? professor d: , the there were two systems that were put forth by a combination of , french telecom and alcatel . and , they differed in some respects , but they e one was called the french telecom alcatel system the other was called the alcatel french telecom system , which is the biggest difference , . but but there 're some other differences , too . , and , they both did very , ? so , my impression is they also did very on the , evaluation set , but , i we have n't seen you ' ve - you have n't seen any final results for that professor d: there is a couple pieces to it . there 's a spectral subtraction style piece it was , wiener filtering . and then there was some p some modification of the cepstral parameters , where they phd a: , actually , something that 's close to cepstral mean subtraction . but , the way the mean is adapted , it 's signal dependent . i ' m , so , the mean is adapted during speech and not during silence . but it 's very close to cepstral mean subtraction . professor d: but some people have done exactly that thing , of and the it 's not to to look in speech only , to try to m to measure these things during speech , that 's p that 's not that uncommon . but i it so it looks like they did some , reasonable things , and they 're not things that we did , precisely . we did unreasonable things , which because we like to try strange things , and our things worked too . and so , , it 's possible that some combination of these different things that were done would be the best thing to do . but the only caveat to that is that everybody 's being real conscious of how much memory and how much cpu they 're using because these , standards are supposed to go on cell phones with m moderate resources in both respects . professor d: everybody was focused elsewhere . , now , one of the things that 's about what we did is , we do have a , a filtering , which leads to a , a reduction in the bandwidth in the modulation spectrum , which allows us to downsample . so , as a result of that we have a reduced , transmission rate for the bits . that was misreported the first time out . it it said the same amount because for convenience sake in the particular way that this is being tested , they were repeating the packets . so it was they were s they had twenty - four hundred bits per second , but they were literally creating forty - eight hundred bits per second , even though y it was just repeated . professor d: , n , this was just a ph phoney thing just to fit into the software that was testing the errors channel errors and so on . so in reality , if you put this system in into , the field , it would be twenty - four hundred bits per second , not forty - eight hundred . , so that 's a feature of what we did . but , , we still have to see how it all comes out . 
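The downsampling point above is worth pinning down. A minimal numpy/scipy sketch, assuming dummy feature trajectories and a decimation factor of 2 (the actual Aurora front-end filters and frame rates are not reproduced here): once each feature trajectory has been low-pass filtered in the modulation domain, it can be resampled at half the frame rate, which is what halves the transmission rate from 4800 to 2400 bits per second.

```python
import numpy as np
from scipy.signal import decimate

# Illustrative only: 200 frames x 13 cepstral features of dummy data.
feats = np.random.randn(200, 13)

# decimate() low-pass filters each trajectory and then keeps every
# 2nd frame, so the band-limited modulation spectrum is preserved
# while the number of frames to transmit is halved.
halved = decimate(feats, q=2, axis=0)

print(feats.shape, halved.shape)  # (200, 13) (100, 13)
```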
, and then there 's the whole standards process , which is another thing altogether . phd c: when is the development set , the , test set results due ? like the day before you leave ? professor d: , probably the day after they leave , but we 'll have to stop it the day before we leave . professor d: and , they , right . and the , results are due like the day before the meeting . professor d: i th i think they are , so , since we have a bit farther to travel than some of the others , we 'll have to get done a little quicker . but , , it 's just tracing down these bugs . , just exactly this thing of , why these features seem to be behaving differently , in california than in oregon . might have something to do with electricity shortage . , we did n't have enough electrons here , but , , the main reason for having , it only takes w to run the two test sets in just in computer time is just a day or so , right ? professor d: so , the who the whole reason for having as long as we have , which was like a week and a half , is because of bugs like that . so , we 're gon na end up with these same sheets that have the percentages and so on just for the professor d: just with the missing columns filled in . , that 'll be good . so , i 'll dis i 'll disregard these numbers . that 's that 's good . phd a: so , hynek will try to push for trying to combine , different things ? or professor d: , that 's , the question is " is there is there some advantage ? " , you could just take the best system and say that 's the standard . but that if different systems are getting at good things , a again within the constraint of the resources , if there 's something simple that you can do now , it 's , very reasonable to have a standard for the terminal 's side and then for the server 's side say , " here 's a number of things that could be done . " so , everything that we did could probably just be added on to what alcatel did , and i it 'd probably work pretty with them , too . , that 's one aspect of it . and then on the terminal 's side , i how much , memory and cpu it takes , but it seems like the filtering , the vad they both had , and , so and they both had some on - line normalization , of sorts , so , it seems like the main different there is the , filtering . and the filtering if you can should n't take a lot of memory to do that , and i also would n't think the cpu , would be much either for that part . so , if you can add those in then , you can cut the data rate in half . so it seems like the right thing to do is to on the terminal 's side , take what they did , if it does seem to generalize to german and danish , take what they did add in a filter , and add in some on the server 's side and that 's probably a reasonable standard . phd a: they are working on this already ? because , su - sunil told me that he was trying already to put some , filtering in the france telecom . professor d: , so that 's what that would be ideal would be is that they could , they could actually show that , a combination of some sort , would work even better than what any of the systems had . and , then it would , be something to discuss in the meeting . but , not clear what will go on . , on the one hand , sometimes people are just anxious to get a standard out there . , you can always have another standard after that , but this process has gone on for a while on already and people might just wanna pick something and say , " ok , this is it . " and then , that 's a standard . , standards are always optional . 
it 's just that , if you disobey them , then you risk not being able to sell your product , or and people often work on new standards while an old standard is in place and so on . so it 's not final even if they declared a standard . the other hand , they might just say they just enough yet to declare a standard . you will be you will become experts on this and know more far more than me about the tha this particular standards process once you go to this meeting . be interested in hearing . so , i 'd be , interested in hearing , your thoughts now you 're almost done . , you 're done in the sense that , you may be able to get some new features from sunil , and we 'll re - run it . , but other than that , you 're done , so , i ' m interested in hearing your thoughts about where you think we should go from this . , we tried a lot of things in a hurry , and , if we can back off from this now and take our time with something , and not have doing things quickly be quite so much the constraint , what you think would be the best thing to do . phd a: , first , to really have a look at the speech from these databases because , we tried several thing , but we did not really look at what 's happening , and where is the noise , and professor d: it 's a novel idea . look at the data . or more generally , i , what is causing the degradation . phd a: actually , there is one thing that , generally we think that most of the errors are within phoneme classes , and so it could be interesting to see if it i do n't think it 's still true when we add noise , and so we have i the confusion ma the confusion matrices are very different when we have noise , and when it 's clean speech . and probably , there is much more between classes errors for noisy speech . phd a: and so , so perhaps we could have a large gain , just by looking at improving the , recognition , not of phonemes , but of phoneme classes , simply . and which is a s a simpler problem , perhaps , but which is perhaps important for noisy speech . professor d: the other thing that strikes me , just looking at these numbers is , just taking the best cases , some of these , even with all of our wonderful processing , still are horrible kinds of numbers . but just take the best case , the - matched , german case after er - matched danish after we the numbers we 're getting are about eight or nine p percent error per digit . this is not usable , professor d: , if you have ten digits for a phone number , every now and then you 'll get it right . , it 's , so , the other thing is that , and and a and also , part of what 's about this is that this is , a realistic almost realistic database . , it 's still not people who are really trying to accomplish something , but , within the artificial setup , it is n't noise artificially added , simulated , additive noise . it 's real noise condition . and , the training , i , is always done on the close talking phd a: , it 's they have all these data from the close mike and from the distant mike , from different driving condition , open window , closed window , and they take all of this and they take seventy percent , for training and thirty percent for testing . phd a: so , training is done on different conditions and different microphones , and testing also is done on different microphone and conditions . so , probably if we only take the close microphones , i the results should be much better than this . professor d: i see . , ok , that explains it partially . 
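A quick arithmetic check of the "ten digits" remark above: with roughly independent per-digit errors, a whole string is recognized correctly only if every digit is, so even the best matched-condition rates quoted are far from usable.

```python
# 8-9% error per digit, as quoted above, over a 10-digit phone number:
per_digit_error = 0.09
string_accuracy = (1 - per_digit_error) ** 10
print(round(string_accuracy, 2))  # ~0.39: barely 4 in 10 numbers come out right
```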
wha - what about i in so the phd a: but the driving conditions , the speed and the road , is different for training and testing , is that right ? and the last condition is close microphone for training and distant for testing . so s so professor d: so , so the high so the right so the highly mismatched case is in some sense a good model for what we ' ve been , typically talking about when we talk about additive noise in and so and i k it does correspond to a realistic situation in the sense that , people might really be trying to , call out telephone numbers or some like that , in their cars and they 're trying to connect to something . phd a: actually , it 's very close to clean speech training because , because the close microphone and noisy speech testing , professor d: and the - matched condition is what you might imagine that you might be able to approach , if that this is the application . you 're gon na record a bunch on people in cars and , and do these training . and then , when y you sell it to somebody , they will be a different person with a different car , and so on . so it 's this is a an optim somewhat optimistic view on it , so , the real thing is somewhere in between the two . , but professor d: right . right , it does n't work . so , in a way , that 's the dominant thing is that even , say on the development set that we saw , the numbers that , that alcatel was getting when choosing out the best single numbers , it was just , it was n't good enough for a real system . you you , so , we still have to do . , and , i so , looking at the data , where , what 's the what 's th what 's characteristic i e , that 's a good thing . does a any you have any thoughts about what else y you 're thinking that you did n't get to that you would like to do if you had more time ? phd e: , f a lot of thing . because we trying a lot of s thing , and we does n't work , we remove these . maybe we trying again with the articulatory feature . i exactly because we tried we some one experiment that does n't work . , forgot it , something i exactly because , tsk maybe do better some step the general , diagram . i exactly s to think what we can improve . professor d: , cuz a lot of time it 's true , there were a lot of times when we ' ve tried something and it did n't work right away , even though we had an intuition that there should be something there . and so then we would just stop it . and , one of the things i do n't remember the details on , but i remember at some point , when you were working with a second stream , and you tried a low - pass filtering to cepstrum , in some case you got professor d: , but it was an msg - like thing , but it was n't msg , you y in some case you got some little improvement , but it was , a small improvement , and it was a big added complication , so you dropped it . but , that was just one try , you just took one filter , threw it there , and it seems to me that , if that is an important idea , which , might be , that one could work at it for a while , as you 're saying . and , and you had , you had the multi - band things also , and , there was issue of that . , barry 's going to be , continuing working on multi - band things as . we were just talking about , some work that we 're interested in . inspired by the by larry saul with the , learning articulatory feature in , in the case of his paper with sonorance based on , multi - band information where you have a combination of gradient learning an and , em . so , that , this is a neat data set . 
, and then , as we mentioned before , we also have the new , digit set coming up from recordings in this room . so , there 's a lot of things to work with . and , what i like about it , in a way , is that , the results are still so terrible . , they 're much better than they were , . we 're talking about thirty to sixty percent , error rate reduction . that 's that 's really great to do that in relatively short time . but even after that it 's still , so poor that , no one could really use it . that 's great that because and y also because again , it 's not something sometimes we ' ve gotten terrible results by taking some data , and artificially , convolving it with some room response , we take a very , at one point , brian and i went downstairs into the basement where it was in a hallway where it was very reverberant and we made some recordings there . and then we , made a simulation of the room acoustics there and applied it to other things , but it was all pretty artificial , and , how often would you really try to have your most crucial conversations in this very reverberant hallway ? this is what 's about the aurora data and the data here , is that it 's a realistic room situation , acoustics acoustic situation , both terms in noise and reflections , and so on and n and , , with something that 's still relatively realistic , it 's still very hard to do very . phd a: so d actually , this is tha that 's why we , it 's a different data . we 're not we 're not used to work with this data . that 's why we should have a loo more closer look at what 's going on . so this would be the first thing , and then , try to , debug what was wrong , when we do aurora test on the msg particularly , and on the multi - band . professor d: no , there 's lots of good things to do with this . so let 's i you were gon na say something else ? , ok . what do you think ? phd c: about other experiments ? , now , i ' m interested in , looking at the experiments where you use , data from multiple languages to train the neural net . and i how far , or if you guys even had a chance to try that , but that would be some it 'd be interesting to me . phd a: again , it 's the thing that , we were thin thinking that it would work , but it did n't work . and , so there is not a bug , but something wrong in what we are doing , perhaps . , something wrong , perhaps in the just in the fact that the labels are what worked best is the hand - labeled data . , so , . i if we can get some hand - labeled data from other languages . it 's not so easy to find . but that would be something interesting t to see . professor d: also , , there was just the whole notion of having multiple nets that were trained on different data . so one form of different data was is from different languages , but the other , i , m in those experiments it was n't so much combining multiple nets , it was a single net that had different so , first thing is would it be better if they were multiple nets , for some reason ? second thing is , never mind the different languages , just having acoustic conditions rather than training them all up in one , would it be helpful to have different ones ? that was a question that was raised by mike shire 's thesis , and on in that case in terms of reverberation . right ? that that sometimes it might be better to do that . but , i do n't think we know for . so , next week , we , wo n't meet because you 'll be in europe . whe - when are you two getting back ? professor d: , that 's right . 
you ' ve got ta s have a saturday overnight , professor d: , . so , we 'll skip next week , and we 'll meet two weeks from now . and , i the main topic will be , you telling us what happened . , so , if we do n't have an anything else to discuss , we should , turn off the machine and then say the real nasty things . professor d: , digits ! good point . good thinking . why do n't you go ahead . ###summary: the berkeley meeting recorder group discussed the most recent progress with their current project , a digit recognition system for use in cell phones. this included some discussion of results , comparisons with other groups' systems , issues involving the setup , and plans for future work. results are required for an upcoming meeting , but since some group members will be away , results need to be in sooner. although error rates have been greatly reduced , current rates are still unusable in a practical situation. there is a problem replicating some results found by partner ogi , but it is unclear why. mn007 and fn002 have been working with the new danish and german databases , making improvements , more so on the danish. the results are reasonable , but still not good enough. however , it has not yet been possible to compare results against the best competing system , since only development set results are available so far. there are a number of things the group wishes to consider in looking for further improvements. various techniques have been tried by the different groups , and they should all be considered for possible combinations of systems. one suitable candidate for combination is the group's own filtering , which reduces the modulation bandwidth and thereby halves the bit transmission rate.
professor a: ok , so had some interesting mail from dan ellis . actually , he redirected it to everybody also so the pda mikes have a big bunch of energy at five hertz where this came up was that i was showing off these wave forms that we have on the web and had n't noticed this , but that the major , major component in the wave in the second wave form in that pair of wave forms is actually the air conditioner . so . i have to be more careful about using that as a as a good illustration , it 's not , of the effects of room reverberation . it is is n't a bad illustration of the effects of room noise . on some mikes but and then we had this other discussion about whether this affects the dynamic range , cuz i know , although we start off with thirty two bits , you end up with sixteen bits and , are we getting hurt there ? but dan is pretty confident that we 're not , that quantization error is not is still not a significant factor there . so there was a question of whether we should change things here , whether we should change a capacitor on the input box for that or whether we should professor a: and the feeling was once we start monk monkeying with that , many other problems could ha happen . and additionally we already have a lot of data that 's been collected with that , so . a simple thing to do is he has a i forget if it this was in that mail or in the following mail , but he has a simple filter , a digital filter that he suggested . we just run over the data before we deal with it . professor a: the other thing that i the answer to , but when people are using feacalc here , whether they 're using it with the high - pass filter option or not . and i if anybody knows . professor a: but . so when we 're doing all these things using our software there is if it 's based on the rasta - plp program , which does both plp and rasta - plp then there is an option there which then comes up through to feacalc which allows you to do high - pass filtering and in general we like to do that , because of things like this and it 's pretty it 's not a very severe filter . does n't affect speech frequencies , even pretty low speech frequencies , but it 's professor a: something like that . there 's some effect above twenty but it 's it 's mild . so , it probably there 's probably some effect up to a hundred hertz but it 's pretty mild . i in the strut implementation of the is there a high - pass filter or a pre - emphasis in the professor a: so . we we want to go and check that in i for anything that we 're going to use the p d a mike for . he says that there 's a pretty good roll off in the pzm mikes so we do n't need to worry about them one way or the other but if we do make use of the cheap mikes , we want to be to do that filtering before we process it . and then again if it 's depending on the option that the our software is being run with , it 's quite possible that 's already being taken care of . but i also have to pick a different picture to show the effects of reverberation . professor a: but . it was since i was talking about reverberation and showing this thing that was noise , it was n't a good match , but it certainly was still an indication of the fact that you get noise with distant mikes . it 's just not a great example because not only is n't it reverberation but it 's a noise that we definitely to do . 
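Dan Ellis's actual filter is not reproduced in the mail discussed above, so the following is only a generic stand-in: a second-order Butterworth high-pass with an assumed 20 Hz edge and an assumed 16 kHz sample rate, which would remove the ~5 Hz air-conditioner energy while leaving even low speech frequencies essentially untouched.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000                                  # assumed sample rate
# 2nd-order Butterworth high-pass, 20 Hz edge (normalized to Nyquist).
b, a = butter(2, 20.0 / (fs / 2), btype='highpass')

x = np.random.randn(fs)                     # one second of dummy signal
y = lfilter(b, a, x)                        # ~5 Hz hum strongly attenuated
```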
so , it does n't take deep a new bold new methods to get rid of five hertz noise , so it was a bad example in that way , but it 's it still is it 's the real thing that we did get out of the microphone at distance , so it was n't it w was n't wrong it was inappropriate . so , but , someone noticed it later pointed it out to me , and i went " , man . why did n't i notice that ? " . so we 'll change our picture on the web , when we 're @ . one of the things i was , i was trying to think about what 's the best way to show the difference an and i had a couple of thoughts one was , that spectrogram that we show is o k , but the eyes and the brain behind them are so good at picking out patterns from noise that in first glance you look at them it does n't seem like it 's that bad because there 's many features that are still preserved . so one thing to do might be to just take a piece of the spec of the spectrogram where you can see that something looks different , an and blow it up , and have that be the part that 's just to show as . professor a: i some things are going to be hurt . another , i was thinking of was taking some spectral slices , like we look at with the recognizer , and look at the spectrum or cepstrum that you get out of there , and the , the reverberation does make it does change that . and so maybe that would be more obvious . professor a: so it 's , at one point in time or twenty over twenty milliseconds , you have a spectrum or a cepstrum . that 's what i meant by a slice . phd b: you could just you could just throw up , the some mfcc feature vectors . , one from one , one from the other , and then , you can look and see how different the numbers are . phd f: , . , at first i had a remark why i am wondering why the pda is always so far . we are always meeting at the beginning of the table and the pda 's there . professor a: . i cuz we have n't wanted to move it . we we could move us , phd f: , anyway . , so . since the last meeting we ' ve tried to put together the clean low - pass downsampling , upsampling , the new filter that 's replacing the lda filters , and also the delay issue so that we considered th the delay issue on the for the on - line normalization . mmm . so we ' ve put together all this and then we have results that are not very impressive . , there is no real improvement . phd f: . actually it 's better . it seems better when we look at the mismatched case but we are like cheated here by the th this problem that in some cases when you modify slight slightly modify the initial condition you end up completely somewhere air somewhere else in the space , the parameters . so . the other system are . for italian is at seventy - eight percent recognition rate on the mismatch , and this new system has eighty - nine . but i do n't think it indicates something , really . i do n't think it means that the new system is more robust professor a: , the test would be if you then tried it on one of the other test sets , if it was phd f: but from this se seventy - eight percent recognition rate system , i could change the transition probabilities for the first hmm and it will end up to eighty - nine also . by using point five instead of point six , point four as in the htk script . so . that 's phd b: i looked at the results when stephane did that and it 's really wo really happens . phd b: th the only difference is you change the self - loop transition probability by a tenth of a percent and it causes ten percent difference in the word error rate . 
phd b: and n not tenth of a percent , one tenth , alright ? so from point five so from point six to point five and you get ten percent better . and it 's what you hypothesized in the last meeting about it just being very phd b: get stuck in some local minimum and this thing throws you out of it i . professor a: , what 's what are according to the rules what are we supposed to do about the transition probabilities ? are they supposed to be point five or point six ? phd b: you 're not allowed to that 's supposed to be point six , for the self - loop . phd b: but changing it to point five is which gives you much better results , but that 's not allowed . phd f: , but even if you use point five , i ' m not it will always give you the better results on other test set or it phd f: but . , the reason is , i not i it was in my mail also , is the fact that the mismatch is trained only on the far microphone . , in for the mismatched case everything is using the far microphone training and testing , whereas for the highly mismatched , training is done on the close microphone so it 's clean speech so you do n't have this problem of local minima probably and for the - match , it 's a mix of close microphone and distant microphone and phd b: somebody , it was morgan , suggested at the last meeting that i actually count to see how many parameters and how many frames . and there are almost one point eight million frames of training data and less than forty thousand parameters in the baseline system . so it 's very , very few parameters compared to how much training data . phd b: i did one quick experiment just to make i had everything worked out and f for most of the for for all of the digit models , they end up at three mixtures per state . and so did a quick experiment , where i changed it so it went to four and it did n't have a r any significant effect at the medium mismatch and high mismatch cases and it had it was just barely significant for the - matched better . so i ' m r gon na run that again but with many more mixtures per state . professor a: . cuz at forty thou you could have , easily four times as many parameters . phd b: and also just seeing what we saw in terms of the expected duration of the silence model ? when we did this tweaking of the self - loop ? the silence model expected duration was really different . and so in the case where it had a better score , the silence model expected duration was much longer . so it was like it was a better match . if we make a better silence model that will help a lot too for a lot of these cases so but one thing i wanted to check out before i increased the number of mixtures per state was in their default training script they do an initial set of three re - estimations and then they built the silence model and then they do seven iterations then the add mixtures and they do another seven then they add mixtures then they do a final set of seven and they quit . seven seems like a lot to me and it also makes the experiments go take a really long time to do one turn - around of the matched case takes like a day . and so in trying to run these experiments i notice , it 's difficult to find machines , compute the run on . and so one of the things i did was i compiled htk for the linux machines cuz we have this one from ibm that 's got like five processors in it ? and so now i ' m you can run on that and that really helps a lot because now we ' ve got , extra machines that we can use for compute . 
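One back-of-the-envelope way to see why a 0.1 change in the self-loop probability matters, per the exchange above: state occupancy under a fixed self-loop is geometric, so the expected stay in every state, including the silence model's, shifts when the self-loop moves from 0.6 to 0.5 (training then compounds the effect).

```python
# Expected duration (in frames) of an HMM state with self-loop
# probability p, before retraining adjusts anything: 1 / (1 - p).
def expected_duration(p_self: float) -> float:
    return 1.0 / (1.0 - p_self)

print(expected_duration(0.6))  # 2.5 frames
print(expected_duration(0.5))  # 2.0 frames
```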
and if i ' m do running an experiment right now where i ' m changing the number of iterations ? from seven to three ? just to see how it affects the baseline system . and so if we can get away with just doing three , we can do many more experiments more quickly . and if it 's not a huge difference from running with seven iterations , , we should be able to get a lot more experiments done . and so . i 'll let what happens with that . but if we can , run all of these back - ends f with many fewer iterations and on linux boxes we should be able to get a lot more experimenting done . so i wanted to experiment with cutting down the number of iterations before i increased the number of gaussians . professor a: so , how 's it going on the so . you you did some things . they did n't improve things in a way that convinced you 'd substantially improved anything . but they 're not making things worse and we have reduced latency , phd f: but actually it seems to do a little bit worse for the - matched case and we just noticed that , actually the way the final score is computed is quite funny . it 's not a mean of word error rate . it 's not a weighted mean of word error rate , it 's a weighted mean of improvements . which means that actually the weight on the - matched is i what what happened is that if you have a small improvement or a small if on the - matched case it will have huge influence on the improvement compared to the reference because the reference system is quite good for the - ma - matched case also . phd b: so it weights the improvement on the - matched case really heavily compared to the improvement on the other cases ? phd f: no , but it 's the weighting of the improvement not of the error rate . phd b: , and it 's hard to improve on the best case , cuz it 's already so good , phd f: but what is that you can have a huge improvement on the h hmk 's , like five percent absolute , and this will not affect the final score almost this will almost not affect the final score because this improvement because the improvement relative to the baseline is small phd f: no , it 's compared to the word er it 's improvement on the word error rate , professor a: so if you have ten percent error and you get five percent absolute improvement then that 's fifty percent . ok . so what you 're saying then is that if it 's something that has a small word error rate , then a even a relatively small improvement on it , in absolute terms , will show up as quite large in this . is that what you 're saying ? yes . but that 's it 's the notion of relative improvement . word error rate . phd f: , but when we think about the weighting , which is point five , point three , point two , it 's on absolute on relative figures , not so when we look at this error rate professor a: that 's why i ' ve been saying we should be looking at word error rate and not at accuracies . it 's we probably should have standardized on that all the way through . it 's just professor a: but you 're but when you look at the numbers , your sense of the relative size of things is quite different . professor a: if you had ninety percent correct and five percent , five over ninety does n't look like it 's a big difference , but five over ten is big . so just when we were looking at a lot of numbers and getting sense of what was important . phd f: like , it 's difficult to say because again i ' m not i have the phd b: hey morgan ? do you remember that signif program that we used to use for testing signi ? is that still valid ? 
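The "funny" scoring discussed above can be made concrete. A sketch, assuming the 0.5 / 0.3 / 0.2 weights mentioned for the well-matched, medium-mismatch and high-mismatch sets; the WER numbers are invented for illustration:

```python
# Final score = weighted mean of *relative* WER improvements over the
# reference front-end, not of the error rates themselves.
weights = {'well': 0.5, 'medium': 0.3, 'high': 0.2}
ref_wer = {'well': 0.05, 'medium': 0.20, 'high': 0.40}   # invented
sys_wer = {'well': 0.045, 'medium': 0.15, 'high': 0.30}  # invented

score = sum(w * (ref_wer[c] - sys_wer[c]) / ref_wer[c]
            for c, w in weights.items())
print(round(score, 3))  # 0.175

# The asymmetry complained about above: the 0.5-point absolute gain on
# the well-matched set and the 10-point absolute gain on the
# high-mismatch set contribute exactly the same 0.05 to the score.
```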
i ' ve been using that . phd b: i should find that new one . use my old one from ninety - two or whatever professor a: , i ' m it 's not that different but he was a little more rigorous , as i recall . phd f: right . so it 's around , like , point five . no , point six percent absolute on italian phd f: we start from ninety - four point sixty - four , and we go to ninety - four point o four . phd f: , no , i ' ve ninety - four . , the baseline , you mean . i do n't i ' m not talking about the baseline here . phd f: for finnish , we start to ninety - three point eight - four and we go to ninety - three point seventy - four . and for spanish we are we were at ninety - five point o five and we go to ninety - three - s point sixty one . professor a: ok , so we are getting hurt somewhat . and is that wh what do what piece you ' ve done several changes here . do what pie phd f: i it 's the filter . because nnn , we do n't have complete result , but the filter so the filter with the shorter delay hurts on italian - matched , which and , and the other things , like downsampling , upsampling , do n't seem to hurt and the new on - line normalization , neither . phd b: i ' m really confused about something . if we saw that making a small change like , a tenth , to the self - loop had a huge effect , can we really make any conclusions about differences in this ? phd f: so . there is first this thing , and then the , i computed the like , the confidence level on the different test sets . and for the - matched they are around point six percent . for the mismatched they are around like let 's say one point five percent . and for the - m hm they are also around one point five . professor a: but ok , so you these degradations you were talking about were on the - matched case . do the does the new filter make things better or worse for the other cases ? professor a: ok , so i the argument one might make is that , " , if you looked at one of these cases and you jiggle something and it changes then you 're not quite what to make of it . but when you look across a bunch of these and there 's some pattern , so h here 's all the if in all these different cases it never gets better , and there 's significant number of cases where it gets worse , then you 're probably hurting things , i would say . so at the very least that would be a reasonably prediction of what would happen with a different test set , that you 're not jiggling things with . so i the question is if you can do better than this . if you can if we can approximate the old numbers while still keeping the latency down . , so . what i was asking , though , is are what 's the level of communication with the o g i gang now , about this and phd f: , we are exchanging mail as soon as we have significant results . for the moment , they are working on integrating the spectral subtraction from ericsson . and so . we are working on our side on other things like also trying a sup spectral subtraction but of our own , another spectral substraction . so it 's ok . it 's going phd f: . for the moment they 're everybody 's quite there is this eurospeech deadline , so . phd f: . and . as soon as we have something that 's significant and that 's better than what was submitted , we will fix the system but we ' ve not discussed it this yet , professor a: sounds like a great idea but that he 's saying people are scrambling for a eurospeech deadline . but that 'll be , done in a week . so , maybe after this next one . 
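For the confidence levels being quoted above (about 0.6% on the well-matched sets, about 1.5% on the others), a naive binomial approximation gives the flavor; this is not the signif program itself, whose exact test is not shown here, and the token count below is invented.

```python
import math

# 95% half-width on an error rate under a naive independence assumption.
def ci_halfwidth(err: float, n_tokens: int, z: float = 1.96) -> float:
    return z * math.sqrt(err * (1.0 - err) / n_tokens)

# e.g. 6% error measured on 10,000 test tokens -> about +/-0.47% absolute,
# roughly the 0.6% figure quoted for the well-matched condition.
print(round(100 * ci_halfwidth(0.06, 10_000), 2))
```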
phd f: we are we are trying to do something with the meeting recorder digits , and the good thing is that there is this first deadline , and , some people from ogi are working on a paper for this , but there is also the special session about th aurora which is which has an extended deadline . the deadline is in may . phd f: for th so f only for the experiments on aurora . so it 's good , professor a: that 's great ! it 's great . so we should definitely get something in for that . but on meeting digits , maybe there 's maybe . professor a: . so , that you could certainly start looking at the issue but it 's probably , on s from what stephane is saying , it 's unlikely to get active participation from the two sides until after they ' ve phd b: i could at least , i ' m going to be out next week but i could try to look into like this cvs over the web . that seems to be a very popular way of people distributing changes and over , multiple sites and things so maybe if figure out how do that easily and then pass the information on to everybody so that it 's , as easy to do as possible and people do n't it wo n't interfere with their regular work , then maybe that would be good . and we could use it for other things around here too . grad c: that 's . and if you 're interested in using cvs , i ' ve set it up here , phd b: i used it a long time ago but it 's been a while so maybe ask you some questions . grad c: so . i 'll be away tomorrow and monday but i 'll be back on tuesday or wednesday . professor a: dave , the other thing , actually , is this business about this wave form . maybe you and talk a little bit at some point about coming up with a better demonstration of the effects of reverberation for our web page , cuz the , actually the it made a good audio demonstration because when we could play that clip the really obvious difference is that you can hear two voices and in the second one and only hear professor a: no , it sound it sounds pretty reverberant , but you ca n't when you play it back in a room with a big room , nobody can hear that difference really . they hear that it 's lower amplitude and they hear there 's a second voice , professor a: that actually that makes for a perfectly good demo because that 's a real obvious thing , that you hear two voices . professor a: that 's ok . but for the visual , just , i 'd like to have , the spectrogram again , because you 're visual abilities as a human being are so good you can pick out , you look at the good one , you look at the cru the screwed up one , and you can see the features in it without trying to @ phd b: i noticed that in the pictures . " hey , th " i my initial thought was " this is not too bad ! " professor a: but you have to , if you look at it closely , you see " , here 's a place where this one has a big formant maj major formants here are moving quite a bit . " and then you look in the other one and they look practically flat . so you could that 's why i was thinking , in a section like that , you could take a look at just that part of the spectrogram and you could say " . this this really distorted it quite a bit . " phd b: the main thing that struck me in looking at those two spectrograms was the difference in the high frequencies . it looked like for the one that was farther away , it really everything was attenuated and that was the main visual thing that i noticed . professor a: but it 's so there are clearly are spectral effects . 
since you 're getting all this indirect energy , then a lot of it does have reduced high frequencies . but the other thing is the temporal courses of things really are changed , and we want to show that , in some obvious way . the reason i put the wave forms in there was because they do look quite different . and so " , this is good . " after after they were put in there i did n't really look at them anymore , cuz they were different . so i want something that has a is a more interesting explanation for why they 're different . grad c: so maybe we can just substitute one of these wave forms and then do some zoom in on the spectrogram on an interesting area . professor a: the other thing that we had in there that i did n't like was that the most obvious characteristic of the difference when you listen to it is that there 's a second voice , and the cuts that we have there actually do n't correspond to the full wave form . it 's just the first there was something where he was having some trouble getting so much in , or . i forget the reason behind it . but it 's the first six seconds of it and it 's in the seventh or eighth second where @ the second voice comes in . so we would like to actually see the voice coming in , too , since that 's the most obvious thing when you listen to it . phd f: i brought some i if some figures here . i start we started to work on spectral subtraction . and the preliminary results were very bad . so the thing that we did is just to add spectral subtraction before this , the wall process , which contains lda on - line normalization . and it hurts a lot . and so we started to look at things like this , which is , it 's so you have the c - zero parameters for one italian utterance . phd f: and i plotted this for two channels . channel zero is the close mic microphone , and channel one is the distant microphone . and it 's perfectly synchronized , and the sentence contain only one word , which is " due " and it ca n't clearly be seen . where where is it ? where is the word ? phd f: this is a plot of c - zero , when we do n't use spectral substraction , and when there is no on - line normalization . there is just some filtering with the lda and some downsampling , upsampling . phd b: c - zero is the close talking ? the close channel ? and s channel one is the phd f: so c - zero is very clean , actually . then when we apply mean normalization it looks like the second figure , though it is not . which is good . , the noise part is around zero and and then the third figure is what happens when we apply mean normalization and variance normalization . what we can clearly see is that on the speech portion the two channel come becomes very close , but also what happens on the noisy portion is that the variance of the noise is phd b: can i ask what does variance normalization do ? w what is the effect of that ? phd b: no . no , i understand what it is , but , what does it what 's what is professor a: , because everything if you have a system based on gaussians , everything is based on means and variances . so if there 's an overall reason , it 's like if you were doing image processing and in some of the pictures you were looking at , there was a lot of light and in some , there was low light , you would want to adjust for that in order to compare things . and the variance is just like the next moment , ? 
so what if one set of pictures was taken so that throughout the course it was went through daylight and night ten times , another time it went thr i is , how much vari professor a: or no . i a better example would be how much of the light was coming in from outside rather than artificial light . so if it was a lot if more was coming from outside , then there 'd be the bigger effect of the change in the so every mean every all of the parameters that you have , especially the variances , are going to be affected by the overall variance . and so , in principle , you if you remove that source , then , you can phd b: i see . ok . so would the major effect is that you 're gon na get is by normalizing the means , professor a: because , again , if you 're trying to distinguish between e and b if it just so happens that the e 's were a more , were recorded when the energy was larger , or the variation in it was larger , than with the b 's , then this will be give you some bias . so the it 's removing these sources of variability in the data that have nothing to do with the linguistic component . professor a: i is if if you have a good voice activity detector , is n't it gon na pull that out ? phd f: if they are good . what it shows is that , perhaps a good voice activity detector is good before on - line normalization and that 's what we ' ve already observed . but , voice activity detection is not an easy thing neither . phd b: but after you do this , after you do the variance normalization i , it seems like this would be a lot easier than this signal to work with . phd f: so . what i notice is that , while i prefer to look at the second figure than at the third one , because you clearly see where speech is . but the problem is that on the speech portion , channel zero and channel one are more different than when you use variance normalization where channel zero and channel one become closer . phd f: so , for i th that it perhaps it shows that the parameters that the voice activity detector should use have to use should be different than the parameter that have to be used for speech recognition . professor a: so you can do that by doing the voi voice activity detection . you also could do it by spect spectral subtraction before the variance normalization , phd f: , but it 's not clear , we . it 's just to the number that at that are here are recognition experiments on italian hm and mm with these two kinds of parameters . and , it 's better with variance normalization . professor a: so it does get better even though it looks ugly . but does this have the voice activity detection in it ? phd f: but the fact is that the voice activity detector does n't work on channel one . phd b: where at what stage is the voice activity detector applied ? is it applied here or a after the variance normalization ? phd f: it 's applied before variance normalization . so it 's a good thing , because i voice activity detection on this should could be worse . professor a: can i ask a , a top - level question , which is " if most of what the ogi folk are working with is trying to integrate this other spectral subtraction , why are we worrying about it ? " phd f: about ? spectral subtraction ? it 's just it 's another they are trying to u to use the ericsson and we 're trying to use something else . and . , and also to understand what happens because fff . when we do spectral subtraction , actually , that this is the two last figures . it seems that after spectral subtraction , speech is more emerging now than before . 
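For reference, this is the batch form of the mean and variance normalization being debated above; the project's own version is on-line (recursive), which this sketch does not reproduce.

```python
import numpy as np

def mean_variance_normalize(feats: np.ndarray) -> np.ndarray:
    """feats: frames x coefficients. Zero mean, unit variance per dim."""
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0) + 1e-8   # guard against zero variance
    return (feats - mu) / sigma

# Dummy cepstra: mean removal kills level/channel offsets; the variance
# division removes overall scale -- but, as the C0 plots above show, it
# also blows up the low-variance noise-only regions relative to speech.
feats = np.random.randn(300, 13) * 2.0 + 5.0
norm = mean_variance_normalize(feats)
```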
phd f: , the difference between the energy of the speech and the energy of the n spectral subtrac subtracted noise portion is larger . , if you compare the first figure to this one actually the scale is not the same , but if you look at the numbers you clearly see that the difference between the c - zero of the speech and c - zero of the noise portion is larger . but what happens is that after spectral subtraction , you also increase the variance of this of c - zero . and so if you apply variance normalization on this , it completely sc screw everything . . and what they did at ogi is just they do n't use on - line normalization , for the moment , on spectral subtraction as soon as they will try on - line normalization there will be a problem . so , we 're working on the same thing but with different system professor a: , i the intellectually it 's interesting to work on things th one way or the other but i ' m just wondering if on the list of things that there are to do , if there are things that we wo n't do because we ' ve got two groups doing the same thing . that 's just just asking . , it 's phd b: there also could be . maybe see a reason f for both working on it too if , if you work on something else and you 're waiting for them to give you spectral subtraction it 's hard to know whether the effects that you get from the other experiments you do will carry over once you then bring in their spectral subtraction module . so it 's almost like everything 's held up waiting for this one thing . i if that 's true or not , but i could see how professor a: i . , we still evidently have a latency reduction plan which is n't quite what you 'd like it to be . that that seems like one prominent thing . and then were n't issues of having a second stream ? that was was it there was this business that , we could use up the full forty - eight hundred bits , phd f: . but they ' we want to work on this . they also want to work on this , we we will try msg , but , and they are t they want to work on the second stream also , but more with some multi - band or , what they call trap or generalized trap . so . professor a: in june . , the other thing is that you saw that mail about the vad v a ds performing quite differently ? that that this there was this experiment of " what if we just take the baseline ? " set of features , just mel cepstra , and you inc incorporate the different v a and it looks like the french vad is actually better significantly better . phd f: but i which vad they use . if the use the small vad i th it 's on it 's easy to do better because it does n't work . i which one . it 's pratibha that did this experiment . we should ask which vad she used . phd d: i do n't @ . he actually , that he say with the good vad of from ogi and with the alcatel vad . and the experiment was sometime better , sometime worse . phd f: but i it 's you were talking about the other mail that used vad on the reference features . professor a: it was just better . it was enough better that it would account for a fair amount of the difference between our performance , actually . so if they have a better one , we should use it . it 's you ca n't work on everything . . phd f: , so we should find out if it 's really better . if it the compared to the small or the big network . and perhaps we can easily improve if we put like mean normalization before the vad . because as you ' ve mentioned . professor a: h hynek will be back in town the week after next , back in the country . 
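Since several variants of spectral subtraction are in play above (the Ericsson one at OGI, another one here), the generic magnitude-domain form is worth writing down; the noise estimation and smoothing details, which are where the variants differ, are omitted and the numbers are dummies.

```python
import numpy as np

def spectral_subtract(mag, noise_mag, alpha=1.0, floor=0.1):
    """mag: frames x bins magnitude spectra; noise_mag: noise estimate."""
    clean = mag - alpha * noise_mag
    # Noise-only bins now fluctuate randomly near zero; the isolated
    # peaks that survive the floor are the 'musical noise', and they are
    # also why the variance of C0 in noise regions goes up, as observed.
    return np.maximum(clean, floor * noise_mag)

spec = np.abs(np.fft.rfft(np.random.randn(200, 256), axis=1))  # dummy spectra
noise = spec[:10].mean(axis=0)        # crude estimate from leading frames
cleaned = spectral_subtract(spec, noise)
```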
and start organizing more visits and connections and , working towards june . phd d: also is stephane was thinking that maybe it was useful to f to think about voiced - unvoiced to work here in voiced - unvoiced detection . and we are looking in the signal . phd f: , my feeling is that actually when we look the proposals , ev everybody is still using some spectral envelope and it 's phd f: , not pitch , but to look at the fine at the high re high resolution spectrum . phd f: so . we do n't necessarily want to find the pitch of the sound cuz i have a feeling that when we look at the just at the envelope there is no way you can tell if it 's voiced and unvoiced , if there is some it 's it 's easy in clean speech because voiced sound are more low frequency and . so there would be more , there is the first formant , which is the larger and then voiced sound are more high frequencies cuz it 's frication when you have noise there is no if you have a low frequency noise it could be taken for voiced speech phd b: , i was just gon na say is n't there are n't there lots of ideas for doing voice activity , or speech - nonspeech rather , by looking at , i harmonics or looking across time professor a: , he was talking about the voiced - unvoiced , though , so , not the speech - nonspeech . phd b: even with e w , even with the voiced - non voiced - unvoiced that you or somebody was talking about professor a: . b we should let him finish what he w he was gon na say , phd f: , so , if we try to develop a second stream , there would be one stream that is the envelope and the second , it could be interesting to have that 's something that 's more related to the fine structure of the spectrum . , so i . we were thinking about like using ideas from larry saul , have a good voice detector , have a good , voiced - speech detector , that 's working on the fft phd f: larry saul could be an idea . we were are thinking about just taking the spectrum and computing the variance of the high resolution spectrum and things like this . professor a: so u s u so many tell you something about that . we had a guy here some years ago who did some work on making use of voicing information to help in reducing the noise . so what he was doing is y you do estimate the pitch . and you from that you estimate or you estimate fine harmonic structure , whichev ei either way , it 's more or less the same . but that you then can get rid of things that are not i if there is strong harmonic structure , you can throw away that 's non - harmonic . and that is another way of getting rid of part of the noise so that 's something that is finer , brings in a little more information than just spectral subtraction . and he had some , he did that in combination with rasta . it was like rasta was taking care of convolutional and he was and got some decent results doing that . so that 's another way . but , there 's there 's all these cues . we ' ve actually back when chuck was here we did some voiced - unvoiced classification using a bunch of these , professor a: works ok . it 's not perfect but that you ca n't given the constraints of this task , we ca n't , in a very way , feed forward to the recognizer the information the probabilistic information that you might get about whether it 's voiced or unvoiced , where w we ca n't affect the distributions or anything . but we what we i we could phd b: did n't the head dude send around that message ? 
, you sent us all a copy of the message , where he was saying that i ' m not , exactly , what the gist of what he was saying , but something having to do with the voice activity detector and that it will that people should n't put their own in . it was gon na be a professor a: i what you could do , maybe this would be w useful , if you have if you view the second stream , before you do klt 's and , if you do view it as probabilities , and if it 's an independent so , if it 's not so much envelope - based by fine - structure - based , looking at harmonicity like that , if you get a probability from that information and then multiply it by , multiply by all the voiced outputs and all the unvoiced outputs , then use that as the take the log of that or pre - nonlinearity , professor a: and do the klt on the on that , then that would i be a reasonable use of independent information . so maybe that 's what you meant . and then that would be phd f: , i was not thinking this , this could be an so you mean have some probability for the v the voicing professor a: it could be pretty small . if you have a tandem system and then you have some it can be pretty small net we used we d did some of this . i did , some years ago , and the and you use to use information primarily that 's different as you say , it 's more fine - structure - based than envelope - based so then it you can guarantee it 's that you 're not looking at very with the other one , and then you only use for this one distinction . professor a: and and so now you ' ve got a probability of the cases , and you ' ve got the probability of the finer categories on the other side . you multiply them where appropriate professor a: if they really are from independent information sources then they should have different kinds of errors and roughly independent errors , and it 's a good choice for , that 's a good idea . phd f: because , , spectral subtraction is good and we could u we could use the fine structure to have a better estimate of the noise but still there is this issue with spectral subtraction that it seems to increase the variance of it 's this musical noise which is annoying if you d you do some on - line normalization after . spectral subtraction and on - line normalization do n't seem to go together very . professor a: or if you do a spectral subtraction do some spectral subtraction first and then do some on - line normalization then do some more spectral subtraction , maybe you can do it layers so it does n't hurt too much . but it but , anyway i was arguing against myself there by giving that example cuz i was already suggesting that we should be careful about not spending too much time on exactly what they 're doing if you get if you go into a harmonics - related thing it 's definitely going to be different than what they 're doing should have some interesting properties in noise . i know that when have people have done the obvious thing of taking your feature vector and adding in some variables which are pitch related or that it has n't my impression it has n't particularly helped . has not . professor a: but that 's a question for this extending the feature vector versus having different streams . professor a: and and it may not have been noisy conditions . 
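A sketch of the stream combination the professor outlines above, under several assumptions: a tandem-style net giving per-frame phone posteriors, an independent fine-structure stream giving P(voiced), and an invented split of the phone set into voiced and unvoiced classes. Scale, renormalize, take logs, then hand the result to the KLT.

```python
import numpy as np

def combine_voicing(phone_post, p_voiced, voiced_idx):
    """phone_post: frames x phones; p_voiced: frames; voiced_idx: indices."""
    scaled = phone_post.copy()
    mask = np.zeros(phone_post.shape[1], dtype=bool)
    mask[voiced_idx] = True
    scaled[:, mask] *= p_voiced[:, None]            # weight voiced phones
    scaled[:, ~mask] *= (1.0 - p_voiced)[:, None]   # weight unvoiced phones
    scaled /= scaled.sum(axis=1, keepdims=True)     # renormalize per frame
    return np.log(scaled + 1e-12)                   # log posteriors for the KLT

T, K = 100, 40                                      # invented sizes
phone_post = np.random.dirichlet(np.ones(K), size=T)  # dummy net outputs
p_voiced = np.random.rand(T)                          # dummy voicing stream
log_post = combine_voicing(phone_post, p_voiced, np.arange(20))
```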
i do n't remember the example but it was on some darpa data and some years ago and so it probably was n't , actually phd f: but we were thinking , we discussed with barry about this , and perhaps thinking we were thinking about some sheet cheating experiment where we would use timit and see if giving the d , this voicing bit would help in terms of frame classification . professor a: why do n't you why do n't you just do it with aurora ? just any i in each frame phd f: . cuz we do n't have , for italian perhaps we have , but we do n't have this labeling for aurora . we just have a labeling with word models but not for phonemes . professor a: but you could you can align so that it 's not perfect , but if you if what was said phd b: but the problem is that their models are all word level models . so there 's no phone models that you get alignments for . you so you could find out where the word boundaries are but that 's about it . grad e: s but we could use the noisy version that timit , which , is similar to the noises found in the ti - digits portion of aurora . phd f: noise , . , that 's right , . , i we can say that it will help , but i . if this voicing bit does n't help , we do n't have to work more about this because . it 's just to know if it how much i it will help and to have an idea of how much we can gain . professor a: in experiments that we did a long time ago and different ta it was probably resource management , you were getting something like still eight or nine percent error on the voicing , as i recall . and , so professor a: what that said is that , left to its own devices , like without the a strong language model and , that you would make significant number of errors just with your probabilistic machinery in deciding phd b: , the though there was one problem with that in that , we used canonical mapping so our truth may not have really been true to the acoustics . professor a: back twenty years ago when i did this voiced - unvoiced , we were getting more like ninety - seven or ninety - eight percent correct in voicing . but that was speaker - dependent actually . we were doing training on a particular announcer and getting a very good handle on the features . and we did this complex feature selection thing where we looked the different possible features one could have for voicing and and exhaustively searched all size subsets and for that particular speaker and you 'd find the five or six features which really did on them . and then doing all of that we could get down to two or three percent error . but that , again , was speaker - dependent with lots of feature selection and a very complex thing . so i would believe that it was quite likely that looking at envelope only , that we 'd be significantly worse than that . phd f: and the all the speechcorders ? what 's the idea behind ? cuz they have to , they do n't even have to detect voiced spe speech ? professor a: they do analysis - by - synthesis . they try they try every possible excitation they have in their code book and find the one that matches best . phd b: can mention one other interesting thing ? . 
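the cheating experiment amounts to appending an oracle voicing bit, derived from the timit phone labels, to each frame's feature vector and comparing frame classification with and without it. a sketch under that reading; the voiced phone set and feature layout are assumptions.

```python
import numpy as np

def append_oracle_voicing(features, frame_phones, voiced_phones):
    """features: (T, D) acoustic features; frame_phones: length-T phone
    label per frame from the timit alignments; voiced_phones: set of
    phone symbols counted as voiced (an assumption about the label set).
    returns (T, D+1) features with the cheating voicing bit appended."""
    bit = np.fromiter((1.0 if p in voiced_phones else 0.0
                       for p in frame_phones), dtype=float)
    return np.hstack([features, bit[:, None]])
```

if frame accuracy barely moves with the oracle bit added, the voicing stream is unlikely to be worth pursuing, which is exactly the gain estimate being discussed.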
one of the ideas that we had come up with last week for things to try to improve the system actually i s we did n't i wrote this in after the meeting b but the thought i had was looking at the language model that 's used in the htk recognizer , which is just a big loop , phd b: so you it goes " digit " and then that can be either go to silence or go to another digit , which that model would allow for the production of infinitely long sequences of digits , so . " i ' m gon na just look at the what actual digit strings do occur in the training data . " and the interesting thing was it turns out that there are no sequences of two - long or three - long digit strings in any of the aurora training data . so it 's either one , four , five , six , up to eleven , and then it skips and then there 's some at sixteen . phd b: but thought that was a little odd , that there were no two or three long so for the heck of it , i made a little grammar which , had it 's separate path for each length digit string you could get . so there was a one - long path and there was a four - long and a five - long and i tried that and it got way worse . there were lots of deletions . so it was , i did n't have any weights of these paths or i did n't have anything like that . and i played with tweaking the word transition penalties a bunch , but i could n't go anywhere . but . " if i only allow " , i should have looked at to see how often there was a mistake where a two - long or a three - long path was actually put out as a hypothesis . but . so to do that right you 'd probably want to have allow for them all but then have weightings and things . so . thought that was a interesting thing about the data .
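the missing piece identified at the end, weighting each fixed-length path, could be estimated directly from the training transcripts. a sketch, assuming the transcripts are available as digit strings; turning these log priors into actual path weights in the htk grammar is left out.

```python
from collections import Counter
import math

def length_log_priors(train_transcripts):
    """train_transcripts: digit strings from the training data, e.g.
    ['3', '81926', ...]. returns {length: log prior} for weighting each
    fixed-length path in the grammar, with add-one smoothing so lengths
    absent from training (here the 2- and 3-long strings) keep a small
    nonzero weight instead of being ruled out entirely."""
    counts = Counter(len(t) for t in train_transcripts)
    max_len = max(counts)
    total = sum(counts.values()) + max_len
    return {n: math.log((counts[n] + 1) / total)
            for n in range(1, max_len + 1)}
```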
the meeting recorder group at berkeley met to discuss recent progress. of greatest interest was the progress on improving the latency and performance of their recogniser. there was also concern over overlap of work with partners ogi , and a lack of a good example of room reverberation for demonstrations. everyone must be sure to use the high-pass filtering option on the group's software , to deal with irregularities between mics. in order to coordinate better with ogi , some sort of source code control is required and me018 has offered to investigate , but only minimal progress can be made until after the upcoming deadline for eurospeech. when he returns me026 will help. also , in two weeks one of the ogi members will return , and meetings should be arranged with him before the next big project meeting. ogi seem to be having some good results with voice activity detection , so the group needs to find out which is the best vad and start using it. there is a waveform example of room reverberation on the group's website that was used in a presentation. it turns out that it is a good example of many things , but not the reverb it is supposed to contain. the group needs to find a better example , maybe by just looking at a closer section of waveform. minor experimenting found that dropping the self-loop transition probability in the hmms by just 0.1 ( from 0.6 to 0.5 , not a tenth of a percent ) can increase performance by 10% , but the rules of the task forbid this change. there is some confusion over what the results produced mean , since it appears they are weighted means of relative improvement rather than of word error rate , which biases improvements in some cases quite heavily. speaker me013 is worried that his group's work on spectral subtraction overlaps with that of ogi , and that the time may be better spent on other tasks. speakers mn007 and fn002 have been working on improving the recogniser performance as well as reducing its latency. work on new filters has reduced latency but made no improvement , and a slight reduction in performance occurred in the well-matched case. they also tried adding some spectral subtraction , but it doesn't work well with on-line normalization and at this stage is just hurting results. they have also been considering the possibility of using a second stream looking at the voicedness of the data , which would draw some ideas from previous work. me018 has been looking at the baseline system and feels it may be possible to decrease the run time of experiments by decreasing the number of training iterations , and he has also got the five-processor linux machine capable of running htk.
professor b: , the , torrent chip . we were two we were , we went through it jim and i went through old emails at one point and for two years there was this thing saying , we 're two months away from being done . it was very believable schedules , too . , we went through and with the schedules and we phd a: so , should we just do the same deal where we go around and do , status report things ? ok . and i when sunil gets here he can do his last . so . professor b: , i do n't do anything . i no , i ' m involved in discussions with people about what they 're doing , but they 're since they 're here , they can talk about it themselves . grad f: you 're gon na talk about aurora , per se ? , this past week i ' ve just been , getting down and dirty into writing my proposal . so , mmm . finished a section on , on talking about these intermediate categories that i want to classify , as a middle step . and , i hope to get this , a full rough draft done by , monday so give it to morgan . phd a: so , is the idea you 're going to do this paper and then you pass it out to everybody ahead of time and ? grad f: right , right . so , y you write up a proposal , and give it to people ahead of time , and you have a short presentation . and then , then everybody asks you questions . phd a: have you d ? i was just gon na ask , do you want to say any a little bit about it , phd a: wh - what you 're gon na you said you were talking about the , particular features that you were looking at , grad f: right . , i was , one of the perplexing problems is , for a while i was thinking that i had to come up with a complete set of intermediate features in intermediate categories to classify right away . but what i ' m thinking now is , i would start with a reasonable set . something something like , re regular phonetic features , just to start off that way . and do some phone recognition . , build a system that , classifies these , these feat , these intermediate categories using , multi - band techniques . combine them and do phon phoneme recognition . look at then i would look at the errors produced in the phoneme recognition and say , ok , i could probably reduce the errors if i included this extra feature or this extra intermediate category . that would that would reduce certain confusions over other confusions . and then and then reiterate . , build the intermediate classifiers . , do phoneme recognition . look at the errors . and then postulate new or remove , intermediate categories . and then do it again . grad f: , for that part of the process , i would use timit . and , then after , , doing timit . right ? , that 's just the ph the phone recognition task . , i wanted to take a look at , things that i could model within word . so , i would mov i would then shift the focus to , something like schw - switchboard , where i 'd i would be able to , to model , intermediate categories that span across phonemes , not just within the phonemes , themselves , and then do the same process there , on a large vocabulary task like switchboard . and for that part i would i 'd use the sri recognizer since it 's already set up for switchboard . and i 'd run some tandem - style processing with , my intermediate classifiers . phd a: . so that 's why you were interested in getting your own features into the sri files . grad f: . that 's why i was asking about that . and i that 's it . any any questions ? grad e: ok , so , last week i finally got results from the sri system about this mean subtraction approach . 
and , we got an improvement , in word error rate , training on the ti - digits data set and testing on meeting recorder digits of , six percent to four point five percent , on the n on the far - mike data using pzm f , but , the near - mike performance worsened , from one point two percent to two point four percent . and , wh why would that be , considering that we actually got an improvement in near - mike performance using htk ? and so , with some input from , andreas , i have a theory in two parts . , first of all htk , sr - the sri system is doing channel adaptation , and so htk was n't . so this , this mean subtraction approach will do a channel normalization and so that might have given the htk use of it a boost that would n't have been applied in the sri case . and also , the andreas pointed out the sri system is using more parameters . it 's got finer - grained acoustic models . so those finer - grained acoustic models could be more sensitive to the artifacts in the re - synthesized audio . and me and barry were listening to the re - synthesized audio and sometimes it seems like you get of a bit of an echo of speech in the background . and so that seems like it could be difficult for training , cuz you could have different phones lined up with a different foreground phone , depending on the timing of the echo . i ' m gon na try training on a larger data set , and then , the system will have seen more examples o of these artifacts and hopefully will be more robust to them . so i ' m planning to use the macrophone set of , read speech , and , professor b: i had another thought just now , which is , remember we were talking before about we were talking in our meeting about , this that some of the other that avendano did , where they were , getting rid of low - energy sections ? , if you did a high - pass filtering , as hirsch did in late eighties to reduce some of the effects of reverberation , avendano and hermansky were arguing that , perhaps one of the reasons for that working was ma may not have even been the filtering so much but the fact that when you filter a an all - positive power spectrum you get some negative values , and you got ta figure out what to do with them if you 're gon na continue treating this as a power spectrum . so , what hirsch did was , set them to zero set the negative values to zero . so if you imagine a waveform that 's all positive , which is the time trajectory of energy , and , shifting it downwards , and then getting rid of the negative parts , that 's essentially throwing away the low - energy things . and it 's the low - energy parts of the speech where the reverberation is most audible . , you have the reverberation from higher - energy things showing up in so in this case you have some artificially imposed reverberation - like thing . , you 're getting rid of some of the other effects of reverberation , but because you have these non - causal windows , you 're getting these funny things coming in , at n and , what if you did ? , there 's nothing to say that the processing for this re - synthesis has to be restricted to trying to get it back to the original , according to some equation . , you also could , just try to make it nicer . professor b: and one of the things you could do is , you could do some vad - like thing and you actually could take very low - energy sections and set them to some , very low or near zero value . 
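a sketch of the hirsch-style trick recounted above: high-pass filter the time trajectory of power in each band and set the resulting negative values to zero, which implicitly discards the low-energy stretches where reverberation is most audible. the filter form and constant are illustrative assumptions, not hirsch's actual design.

```python
import numpy as np

def hirsch_trajectory_filter(band_power, alpha=0.97):
    """band_power: (T,) all-positive power trajectory in one frequency
    band. high-pass filter it by subtracting a first-order running
    low-pass estimate, then clip negative values to zero. alpha is an
    illustrative constant."""
    band_power = np.asarray(band_power, dtype=float)
    lowpass = np.empty_like(band_power)
    acc = band_power[0]
    for t, p in enumerate(band_power):
        acc = alpha * acc + (1.0 - alpha) * p
        lowpass[t] = acc
    # the clipped (formerly negative) samples are the low-energy stretches
    # where reverberation is most audible
    return np.maximum(band_power - lowpass, 0.0)
```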
professor b: i'm just saying that if it turns out these echoes you're hearing, or pre-echoes, whichever they are, are part of what's causing the problem, you could actually get rid of them. it would be pretty simple, and you'd do it in a pretty conservative way, so that if you made a mistake you were more likely to keep in an echo than to throw out speech.
phd g: what is the reverberation like in the speech that you are using?
professor b: these are just a close microphone and a distant microphone that he's doing these different tests on. we should do a measurement in here; i think we never have. i would guess point seven, point eight seconds rt.
professor b: but the other thing is, i was using the word "reverberation" in two ways. he's taking out some reverberation, but he's also putting something in, because he has averages over multiple windows stretching out to twelve seconds, which are then being subtracted from the speech. and sometimes you'll be subtracting from some larger number and sometimes you won't.
professor b: so you can end up with some components that are affected by things that are seconds away. and if it's a low-energy portion, you might actually hear some funny things.
grad e: one thing i noticed is that the mean subtraction seems to make the pzm signals louder after they've been re-synthesized. so i was wondering, is it possible that one reason it helped with the aurora baseline system is just as a gain control? because some of the pzm signals sound pretty quiet if you don't amplify them.
professor b: i don't think just multiplying the signal by two would have any effect. if you really have louder signals, what you mean is that you have a better signal-to-noise ratio.
professor b: so if what you're doing is improving the signal-to-noise ratio, then it would be better. but just being bigger with the same signal-to-noise ratio
phd c: the system does use the absolute energy, so it's a little bit dependent on the signal level. but not that much, i think.
professor b: so if you change the absolute level by a factor of two in both training and test, it will have no effect.
phd a: did you add this data to the training set for aurora, or did you just test on this?
phd a: morgan was just saying that as long as you do it in both training and testing, it shouldn't have any effect.
grad e: right. i trained on clean ti-digits and did the mean subtraction on clean ti-digits. but i'm not sure whether it made the clean ti-digits any louder.
grad e: if it's trying to find a reverberation filter, it could be that this reverberation filter is making things quieter, and then taking it out makes things louder.
professor b: no, there's nothing inherent about removing it, if you're really removing it.
professor b: it might just be some artifact of the processing. i don't know.
phd a: i wonder if there could be something like, for the pzm data, occasionally somebody hits the table and you get a spike. i'm just wondering if there's something about doing the mean normalization where it could cause you to have a better signal-to-noise ratio.
professor b: there is this, wait a minute: subtracting the mean log spectrum is like dividing by the spectrum.
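(the identity behind that last remark, written out: subtracting the mean log-magnitude and exponentiating is division by the geometric mean of the magnitudes,

$$ |\hat{X}_t(k)| \;=\; \exp\!\Big(\log|X_t(k)| - \tfrac{1}{T}\textstyle\sum_{t'=1}^{T}\log|X_{t'}(k)|\Big) \;=\; \frac{|X_t(k)|}{\big(\prod_{t'=1}^{T}|X_{t'}(k)|\big)^{1/T}} $$

so if the denominator, which plays the role of the channel estimate, is under-estimated in some bins, those bins come out larger than they went in, which is one way the output could end up louder.)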
professor b: so, depending on what you divide by, if your estimate is off and sometimes you're getting a small number, you could make it bigger. it could be that there's some normalization missing. you'd think it shouldn't get larger, but maybe in practice it does. that's something to think about.
phd c: i had a question about the sri system. you trained it on ti-digits, but except for this, it's exactly the same system as the one that was tested before and trained on macrophone, right? so on ti-digits it gives you one point two percent error rate, and on macrophone it's still zero point eight. is it exactly the same system?
grad e: if you're talking about the macrophone results that andreas had about a week and a half ago, it's the same system.
phd c: so you use vtl, vocal tract length normalization, and mllr transformations also, and
professor b: was his point eight percent a result of testing on macrophone, or training?
professor b: so that was done already. and it's point eight?
phd c: i've just been testing the new aurora front-end, the aurora system actually, so the front-end plus htk acoustic models, on the meeting digits, and it's a little bit better than the previous system: i have two point seven percent error rate, and before, with the system that was proposed, it was three point nine.
phd c: so we have the new lda filters, and, maybe i didn't look closely, but one thing that makes a difference is this dc offset compensation. did you have a look at the meeting digits, whether they have a dc component?
phd g: no. the dc component could be negligible if you are recording through a mike. all of the mikes have dc removal, some capacitor sitting right there in the bias circuit.
professor b: but no, because there's a sample-and-hold in the a-to-d, and these typically do have a dc offset.
phd g: ah, so it's the a-to-d that introduces the dc.
professor b: unless, actually, there are instrumentation mikes that do pass right down to dc. no, it's the electronics, and then there's amplification afterwards. it was in the wall street journal data, or one of the darpa things, i can't remember. there was this big dc offset we didn't know about for a while, while we were messing with it, and we were getting these terrible results. then we were talking to somebody and they said, "didn't you know? everybody knows that. there's all this dc offset in there." so yes, you can have dc offset in the data.
grad e: i also did some experiments on normalizing the phase. i came up with a web page that people can take a look at. the interesting thing i tried was an idea adam and morgan had, since my original attempts to take the mean of the phase spectra over time and normalize by subtracting that off didn't work. we thought that might be due to problems with the arithmetic of phases: they add in this modulo-two-pi way, so there's reason to believe that the approach of taking the mean of the phase spectrum wasn't really mathematically correct.
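(a tiny python illustration of the modulo-two-pi problem just raised: two phases sitting just either side of the wrap point average to roughly zero under ordinary arithmetic, while summing unit phasors and taking the angle, a circular mean, recovers the right direction.)

```python
import numpy as np

# two phases just either side of the +/- pi wrap point
phases = np.array([np.pi - 0.1, -np.pi + 0.1])

naive = phases.mean()                            # ~0.0, pointing the wrong way
circular = np.angle(np.exp(1j * phases).sum())   # ~pi, the right direction
print(naive, circular)
```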
grad e: so what i did instead was take the mean of the fft spectrum, without taking the log or anything, take the phase of that, and subtract that phase off to normalize. but that didn't work either.
professor b: see, we have a different interpretation of this. he says it doesn't work; i say it works magnificently, but just not for the task we intended. it gets rid of the speech.
professor b: it leaves the junk. it's tremendous. you see, all he has to do is go back and reverse what he did before, and he's really got something.
professor b: exactly, you've got it. so it's a general rule.
professor b: just listen very carefully to what i say and do the opposite, including what i just said.
phd c: concerning these meeting digits, i'm more interested in trying to figure out what the remaining difference is between the sri system and the aurora system. so i will maybe train gender-dependent models, because this is also one big difference between the two systems. the other difference was that the acoustic models of the sri system are maybe more complex. but chuck, you did some experiments with this
professor b: it sounds like they also have all these different kinds of adaptation: channel adaptation, speaker adaptation.
phd a: i'm not sure how they would do it when they're working with the digits,
phd a: but in the switchboard data there's conversation-side normalization for the non-c-zero components,
phd c: yes, and this is another difference. their normalization works at the utterance level, but we have a system that does it on-line.
phd c: so it might be better, or it might be worse if the channel is constant.
phd g: and are the acoustic models triphone models, or whole-word?
professor b: it's probably more than that. i think they use these genone things, so there are these pooled models, and they can go out to all sorts of dependencies.
professor b: they have tied states and, i'm just guessing here, but i think they don't just have triphones; they have a range of dependencies.
phd c: the first thing i want to do is maybe just these gender things, and maybe see with andreas how much it helps and what's in the model.
phd a: so on the numbers you got, the two point seven, is that using the same training data that the sri system used and got one point two?
grad e: the aurora baseline is set up with this version of the clean training set that's been filtered with this g-seven-one-two filter. to train the sri system on digits, andreas used the original ti-digits, which don't have this filter. but i don't think there's any other difference.
professor b: so are these results comparable? you were getting something like two point four percent with the aurora baseline on clean ti-digits, when training the sri system with clean ti-digits, right?
professor b: and so is your two point seven comparable, where you're using the submitted system?
professor b: you were using htk. that's right.
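(the utterance-level versus on-line normalization difference mentioned above, sketched minimally; alpha is an illustrative time constant, not a value from either system, and real implementations typically skip or special-case c-zero.)

```python
import numpy as np

def cmn_utterance(c):
    """per-utterance (batch) cepstral mean normalization."""
    return c - c.mean(axis=0, keepdims=True)

def cmn_online(c, alpha=0.995):
    """on-line variant: a causal, exponentially weighted running mean,
    so the estimate tracks slow channel changes frame by frame."""
    out = np.empty_like(c)
    mean = c[0].copy()   # crude initialization from the first frame
    for t in range(len(c)):
        mean = alpha * mean + (1 - alpha) * c[t]
        out[t] = c[t] - mean
    return out
```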
professor b: ok, so the comparable number then, for what you were talking about, since it was htk, would be the two point f
grad e: do you mean the baseline? the baseline aurora-two system, trained on ti-digits, tested on meeting recorder near mike: we saw it today, and it was about six point six percent.
phd c: so they are helping. another thing i would maybe like to do is test the sri system that's trained on macrophone on the noisy ti-digits, because i'm still wondering where this improvement comes from. when you train on macrophone, it seems better on meeting digits, but i wonder if that's just because macrophone is acoustically closer to the meeting digits than ti-digits is; ti-digits are very cleanly recorded digits.
phd a: it would also be interesting to do the regular aurora test,
phd c: that's what i wanted: just using the sri system, test it on the aurora ti-digits.
phd a: i guess the work would be in getting the files into the right formats, right? because when you train up the aurora system, you're also training on all the data.
professor b: that's true, but when we've had these meetings week after week, oftentimes people have not done the full range of things on whatever they're trying, because it's a lot of work, even just with htk. so it's a good idea, but it makes sense to do some pruning first, with a test or two that makes sense for you, and then take the likely candidates and go further.
phd c: but just testing on ti-digits would already give us some information about what's going on. ok. the next thing is this vad problem. i'm just talking about the curves that i sent you, which show that when the snr decreases, the current vad approach doesn't drop many frames for some particular noises, which might be noises that are acoustically closer to speech.
professor b: just to clarify something for me: supposedly, in the next evaluation, they're going to be supplying us with boundaries. so does any of this matter, other than our own interest in it?
phd c: first of all, the boundaries might be, say, two hundred milliseconds before and after speech, so removing more than that might still make a difference in the results.
professor b: do we know that? is there some reason we think that's the case?
phd c: no, because we didn't look that much at it. but still, it's an interesting problem.
professor b: maybe we'll get some insight on that when the gang gets back from crete, because there are lots of interesting problems. and if they really are going to have some means of giving us fairly tight boundaries, then that won't be so much the issue. but i don't know.
phd g: we were wondering whether that vad is going to be a realistic one or some manual segmentation. if it's going to be a realistic one, then we can actually use their markers and shift the point around the way we want: rather than keeping the twenty frames, we can move the marker to a point we find more suitable.
phd g: but if it's going to be something like a manual segmenter, then we can't use that information anymore, because it's not going to be the one that's used in the final evaluation.
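(one hypothetical way the supplied markers could be shifted, as just suggested: move generous start and end markers inward to the first and last frames that clear an energy threshold. the threshold and the frame-energy input are assumptions for illustration.)

```python
import numpy as np

def tighten_boundaries(energy, start, end, rel_thresh=0.05):
    """given start/end frame markers padded by e.g. ~200 ms, move them
    inward to the first/last frame whose energy clears a threshold
    relative to the peak inside the marked region."""
    thresh = rel_thresh * energy[start:end].max()
    idx = np.nonzero(energy[start:end] >= thresh)[0]
    if idx.size == 0:
        return start, end        # nothing above threshold; keep markers
    return start + idx[0], start + idx[-1] + 1
```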
phd g: we don't know what type of vad they're going to provide.
phd c: and actually, even for the evaluation it might still be interesting to work on this, because the boundaries they would provide are just the start of speech and the end of speech, at the utterance level.
phd g: with some gap, with some pauses in the middle, provided they meet whatever hang-over time they're talking about.
professor b: so you could get at some of that, although it'd be hard.
phd c: it might be useful for noise estimation and a lot of other things we want to work on.
phd c: so i did start to test putting together two vads, which was not much work actually. i re-implemented a vad that's very close to the energy-based vad the other aurora guys use: it just puts a threshold on the noise energy, detects the first group of four frames that have an energy above this threshold, and from this point tags the frames as speech. so it removes the first silent portion of each utterance. and it really removes it, even on the noises where our mlp vad doesn't work well.
professor b: hmm. i would have thought that having some spectral information would give you better performance; in the old days people would use energy and zero crossings. you might have low-energy fricatives or stop consonants, things like that.
professor b: the idea is that if you use purely energy and don't look at anything spectral, you don't have a good way of distinguishing between low-energy speech components and nonspeech. just as a gross generalization, many nonspeech noises have a low-pass characteristic, some downward slope, and most low-energy speech components that are unvoiced have a high-pass characteristic, an upward slope. so the beginning of an s sound, just starting in, might be pretty low-energy, but it will tend to have this high-frequency component, whereas a lot of rumble and background noise will be predominantly low-frequency. by itself that's not enough to tell you, but it plus energy plus timing information is; if you look in rabiner and schafer from twenty-five years ago, that's what they were using then.
phd c: so it might be that what i did removes low-energy speech frames, because the way i do it is to combine the two decisions, the one from the mlp and the one from the energy-based vad, with the "and" operator. i only keep the frames where the two agree that it's speech, so if the energy-based vad dropped low-energy speech, it's lost. but still, the way it's done right now, it seems to help on the noises where our vad was not very good.
professor b: one could imagine combining them in different ways. what you're saying is that the mlp-based one has the spectral information.
phd c: the way i use it is an "and" operator, so
phd c: the frames that are dropped by the energy-based system are dropped even if the mlp decides to keep them.
professor b: but in principle what you'd want to do is have a probability estimated by each one and put them together.
phd a: something that i've used in the past, when just looking at the energy, is to look at the derivative.
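(a sketch of the energy-based vad just described and of the "and" combination: find the first run of four consecutive frames whose log-energy clears the noise floor by some margin, tag everything from there on as speech, then keep only the frames both detectors call speech. the margin value is an assumption.)

```python
import numpy as np

def energy_vad(log_e, noise_floor, margin=3.0, run=4):
    """tag frames as speech from the first group of `run` consecutive
    frames whose log-energy exceeds noise_floor + margin; everything
    before that first group (the leading silence) is dropped."""
    above = log_e > noise_floor + margin
    speech = np.zeros(len(log_e), dtype=bool)
    for t in range(len(above) - run + 1):
        if above[t:t + run].all():
            speech[t:] = True
            break
    return speech

# the "and" combination: a frame survives only if both detectors agree
# combined = energy_vad(log_e, noise_floor) & mlp_vad_decisions
```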
phd a: and you make your decision when the derivative has been increasing for so many frames; then you say that's the beginning of speech. i'm trying to remember whether that requires keeping some amount of speech in a buffer; it depends on how you do it. but that's been a useful thing.
phd g: everything has a delay associated with it. you always have to keep a buffer and only then make a decision, because you still need to smooth the decision further. so that's always there.
phd c: actually, i maybe don't want to work too much on it right now. i wanted to see whether what i observed was caused by this vad problem, and it seems to be the case. the second thing is this spectral subtraction, where i just started yesterday to launch a bunch of twenty-five experiments with different values for the parameters. it's the makhoul-type spectral subtraction, which uses an over-estimation factor: i subtract more noise than the noise spectrum estimated on the noise portion of the utterances. so i tried several over-estimation factors, and after subtraction i also add a constant noise, trying different noise values, and we'll see what happens. it depends on the parameters you use, but for moderate over-estimation factors and a moderate added noise level you still have a lot of musical noise. on the other hand, when you subtract more and add more noise, you get rid of this musical noise but maybe you distort the speech a lot. so far it doesn't seem to help; we'll see. the next thing i will maybe try is to smooth the result of the subtraction, to get rid of the musical noise, using some filter,
phd g: you can smooth the snr estimate also. your filter is a function of snr, right?
phd c: so as to get something closer to what you tried to do with wiener filtering.
phd g: so, i've been playing with this wiener filter. there were some bugs in the program, so initially i was trying to clear them up. one of the bugs was that i was assuming the initial frames were always silence, that the vad always started in the silence state, but it wasn't so for some utterances. so it wasn't estimating the noise initially, and then it never estimated it, because i assumed it was always silence.
phd c: so this is on speechdat-car italian? so in some cases there are also
phd g: speechdat-car italian. there are a few cases, actually, which i found later, where there are.
phd g: so that was one of the bugs in estimating the noise. once it was cleared, i ran a few experiments with different ways of smoothing the estimated clean speech, estimating the noise, and smoothing the snr. the trend seems to be that smoothing the current estimate of the clean speech for deriving the snr, which is like deriving the wiener filter, seems to be helping, and then updating it quite fast, using a very small time constant. so we have a few results where more smoothing is helping. but it's still only comparable to the baseline; i haven't got anything beyond the baseline, and the baseline is not using any wiener filter.
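(going back to the spectral subtraction described a few turns up, a minimal sketch of over-subtraction with a constant noise floor added back; `over` and `noise_add` stand in for the over-estimation factor and added-noise level being swept in those twenty-five experiments, and the exact makhoul formulation is not reproduced here.)

```python
import numpy as np

def over_subtract(power, noise_psd, over=2.0, noise_add=0.05):
    """subtract `over` times the estimated noise spectrum, clip at
    zero, then add back a small constant noise to mask musical noise."""
    # power: (n_freq, n_frames); noise_psd: (n_freq,), estimated on
    # the leading noise portion of the utterance
    clean = np.maximum(power - over * noise_psd[:, None], 0.0)
    return clean + noise_add * noise_psd[:, None]
```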
phd g: and so i'm trying a few more experiments with different time constants for smoothing the noise spectrum, smoothing the clean speech, and smoothing the snr; there are three time constants, and i'm just playing around. one is fixed, since smoothing the clean speech is helping, so i'm not going to change it much, but i'm still experimenting with the way i estimate the noise and the way i estimate the snr. the other thing is putting a floor on the snr, because in some cases the estimated clean speech goes to very low values, so the snr is very low, and that actually creates a lot of variance in the low-energy regions of the speech. so i'm thinking of putting a floor on the snr so that it doesn't vary a lot in the low-energy regions. so far i've been testing only with the baseline, which doesn't have any lda filtering or on-line normalization; i want to separate the contributions out. so it's just the vad, plus the wiener filter, plus the baseline system, which is just the mel frequency coefficients. the other thing i tried was one of those carlos filters, which hynek had, to see whether it really helps or not; it was just a run to see whether it degrades or helps. it seems like it's not hurting a lot to just blindly pick one filter, which is nothing but a four-hertz band-pass filter on the cubic root of the power spectrum. that was the filter carlos had, and i just wanted to see whether it's worth trying. it doesn't seem to be degrading a lot, so there must be something that can be done with that type of noise compensation as well. i would ask carlos how he derived those filters, and whether he has any filters derived on ogi stories with the type of added noise we're using currently. so maybe i'll
professor b: so if you have this band-pass filter, you probably get negative values.
phd g: and i'm flooring them to zero right now. the spectrogram shows it actually enhances the onsets and offsets, the beginnings and ends of the speech. there seem to be deep valleys at the beginning and end of high-energy regions, because the filter has a mexican-hat type structure; those are the regions where, when i look at the spectrogram, there are those deep valleys. but the rest of it seems to be pretty good. that's something i observe using that filter. there are a few negative values, but not a lot, because the filter doesn't have a really deep negative portion, so it's not creating a lot of negative values in the cubic root. i'll maybe continue with that for a while, and ask carlos a little more about how to play with those filters, while also making this wiener filter better. that's it, morgan.
phd g: i actually didn't get enough time to work on the subspace approach last week. it was mostly about finding those bugs, so i didn't work much on that.
phd d: i am still working with vts.
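(before the vts discussion continues, a minimal sketch of the wiener-style gain just described: smooth the clean-speech estimate over time, derive an snr from it, and floor the snr so the gain doesn't collapse in low-energy regions. the single time constant here collapses the three being tuned, and all values are illustrative.)

```python
import numpy as np

def wiener_gains(power, noise_psd, alpha=0.9, snr_floor=0.1):
    """per-frame wiener-style gain g = snr / (1 + snr) from a
    time-smoothed clean-speech estimate with a floored snr."""
    gains = np.empty_like(power)
    clean = np.maximum(power[:, 0] - noise_psd, 0.0)
    for t in range(power.shape[1]):
        inst = np.maximum(power[:, t] - noise_psd, 0.0)
        clean = alpha * clean + (1 - alpha) * inst    # smooth the estimate
        snr = np.maximum(clean / (noise_psd + 1e-10), snr_floor)
        gains[:, t] = snr / (1.0 + snr)
    return gains
```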
phd d: and one of the things we said here last week is that maybe the problem was that the signals have different levels of energy.
phd d: and, talking with stephane and with sunil, we decided that maybe it would be interesting to apply on-line normalization before applying vts. but then we decided that doesn't work, because we would modify the noise as well. and, thinking about that, we then decided that maybe it's a good idea to apply vts in the cepstral domain. i didn't do the experiment yet.
professor b: the other thing is that c-zero would be different, so you could do a different normalization for c-zero than for the other coefficients anyway. the other thing i was going to suggest is that you could have two kinds of normalization with different time constants: you could do some normalization before the vts and then some other normalization after. but c-zero certainly acts differently than the others do.
phd d: we decided to derive the new expressions for working in the cepstral domain. i am working on that now, but i'm not sure it will be useful. it's quite a lot of work; not too much, but it's work. and i would like to know whether applying vts in the cepstral domain will work better than applying it in the mel filter bank domain. i don't have any feeling for it.
professor b: you're the first one here to work with vts, so maybe we could call up someone else who has and ask their opinion. i don't have a good feeling for it either.
phd c: actually, the vts that you tested before was in the log domain, so the codebook is dependent on the level of the speech signal.
phd c: so if you have something that's independent of this level, i expect it to be a better model of speech.
professor b: you wouldn't even need to switch to cepstra. you can just normalize the
professor b: and then you have one number which is very dependent on the level, because it is the level,
phd c: but here we would also have to be careful about removing the mean of the speech, not of the noise.
phd d: i was thinking of estimating the noise with the first frames and then applying the vad before the on-line normalization. i am thinking about that and working on it, but i don't have results this week.
professor b: one of the things we've talked about, and it might be time to start thinking about it pretty soon: as we look at the pros and cons of these different methods, how do they fit in with one another? because we've talked about potentially doing some combination of a couple of them. maybe pretty soon we'll have some sense of what their characteristics are, so we can see what should be combined.
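(the level-dependence point in equations: scaling the waveform by a gain $g$ shifts every log-spectral value by $\log g$,

$$\log|g\,X_t(k)| = \log g + \log|X_t(k)|,$$

so a codebook trained in the log domain at one level won't match another level. after mean removal the gain cancels,

$$(x_t + \log g) - \overline{(x + \log g)} = x_t - \bar{x},$$

which is the argument for running vts on mean-normalized features, or in a cepstral representation where the level sits almost entirely in c-zero, provided the mean is estimated on speech rather than noise.)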
###summary: a typical progress report meeting for the icsi meeting recorder group at berkeley. each member of the group reported their most recent progress and any results they have achieved. this then prompted discussion about the reasons behind the findings, which were for the most part not as expected. topics the group touched upon included spectral subtraction, phase normalization, and voice activity detection, along with comparisons between systems. a couple of issues have arisen that need to be looked into further, such as dc offset and the effects seen on pzm signals.
###dialogue: professor b: , the , torrent chip . we were two we were , we went through it jim and i went through old emails at one point and for two years there was this thing saying , we 're two months away from being done . it was very believable schedules , too . , we went through and with the schedules and we phd a: so , should we just do the same deal where we go around and do , status report things ? ok . and i when sunil gets here he can do his last . so . professor b: , i do n't do anything . i no , i ' m involved in discussions with people about what they 're doing , but they 're since they 're here , they can talk about it themselves . grad f: you 're gon na talk about aurora , per se ? , this past week i ' ve just been , getting down and dirty into writing my proposal . so , mmm . finished a section on , on talking about these intermediate categories that i want to classify , as a middle step . and , i hope to get this , a full rough draft done by , monday so give it to morgan . phd a: so , is the idea you 're going to do this paper and then you pass it out to everybody ahead of time and ? grad f: right , right . so , y you write up a proposal , and give it to people ahead of time , and you have a short presentation . and then , then everybody asks you questions . phd a: have you d ? i was just gon na ask , do you want to say any a little bit about it , phd a: wh - what you 're gon na you said you were talking about the , particular features that you were looking at , grad f: right . , i was , one of the perplexing problems is , for a while i was thinking that i had to come up with a complete set of intermediate features in intermediate categories to classify right away . but what i ' m thinking now is , i would start with a reasonable set . something something like , re regular phonetic features , just to start off that way . and do some phone recognition . , build a system that , classifies these , these feat , these intermediate categories using , multi - band techniques . combine them and do phon phoneme recognition . look at then i would look at the errors produced in the phoneme recognition and say , ok , i could probably reduce the errors if i included this extra feature or this extra intermediate category . that would that would reduce certain confusions over other confusions . and then and then reiterate . , build the intermediate classifiers . , do phoneme recognition . look at the errors . and then postulate new or remove , intermediate categories . and then do it again . grad f: , for that part of the process , i would use timit . and , then after , , doing timit . right ? , that 's just the ph the phone recognition task . , i wanted to take a look at , things that i could model within word . so , i would mov i would then shift the focus to , something like schw - switchboard , where i 'd i would be able to , to model , intermediate categories that span across phonemes , not just within the phonemes , themselves , and then do the same process there , on a large vocabulary task like switchboard . and for that part i would i 'd use the sri recognizer since it 's already set up for switchboard . and i 'd run some tandem - style processing with , my intermediate classifiers . phd a: . so that 's why you were interested in getting your own features into the sri files . grad f: . that 's why i was asking about that . and i that 's it . any any questions ? grad e: ok , so , last week i finally got results from the sri system about this mean subtraction approach . 
and , we got an improvement , in word error rate , training on the ti - digits data set and testing on meeting recorder digits of , six percent to four point five percent , on the n on the far - mike data using pzm f , but , the near - mike performance worsened , from one point two percent to two point four percent . and , wh why would that be , considering that we actually got an improvement in near - mike performance using htk ? and so , with some input from , andreas , i have a theory in two parts . , first of all htk , sr - the sri system is doing channel adaptation , and so htk was n't . so this , this mean subtraction approach will do a channel normalization and so that might have given the htk use of it a boost that would n't have been applied in the sri case . and also , the andreas pointed out the sri system is using more parameters . it 's got finer - grained acoustic models . so those finer - grained acoustic models could be more sensitive to the artifacts in the re - synthesized audio . and me and barry were listening to the re - synthesized audio and sometimes it seems like you get of a bit of an echo of speech in the background . and so that seems like it could be difficult for training , cuz you could have different phones lined up with a different foreground phone , depending on the timing of the echo . i ' m gon na try training on a larger data set , and then , the system will have seen more examples o of these artifacts and hopefully will be more robust to them . so i ' m planning to use the macrophone set of , read speech , and , professor b: i had another thought just now , which is , remember we were talking before about we were talking in our meeting about , this that some of the other that avendano did , where they were , getting rid of low - energy sections ? , if you did a high - pass filtering , as hirsch did in late eighties to reduce some of the effects of reverberation , avendano and hermansky were arguing that , perhaps one of the reasons for that working was ma may not have even been the filtering so much but the fact that when you filter a an all - positive power spectrum you get some negative values , and you got ta figure out what to do with them if you 're gon na continue treating this as a power spectrum . so , what hirsch did was , set them to zero set the negative values to zero . so if you imagine a waveform that 's all positive , which is the time trajectory of energy , and , shifting it downwards , and then getting rid of the negative parts , that 's essentially throwing away the low - energy things . and it 's the low - energy parts of the speech where the reverberation is most audible . , you have the reverberation from higher - energy things showing up in so in this case you have some artificially imposed reverberation - like thing . , you 're getting rid of some of the other effects of reverberation , but because you have these non - causal windows , you 're getting these funny things coming in , at n and , what if you did ? , there 's nothing to say that the processing for this re - synthesis has to be restricted to trying to get it back to the original , according to some equation . , you also could , just try to make it nicer . professor b: and one of the things you could do is , you could do some vad - like thing and you actually could take very low - energy sections and set them to some , very low or near zero value . 
, i ' m just saying if it turns out that these echoes that you 're hearing are , or pre - echoes , whichever they are , part of what 's causing the problem , you actually could get rid of them . be pretty simple . , you do it in a pretty conservative way professor b: so that if you made a mistake you were more likely to keep in an echo than to throw out speech . phd g: on , the one what the s in the speech that you are using like ? professor b: so it 's these are just microphone this micro close microphone and a distant microphone , he 's doing these different tests on . , we should do a measurement in here . i g think we never have . it 's i would , point seven , point eight seconds f , r t professor b: so . but the other thing is , he 's putting in w i was using the word " reverberation " in two ways . he 's also putting in , a he 's taking out some reverberation , but he 's putting in something , because he has averages over multiple windows stretching out to twelve seconds , which are then being subtracted from the speech . and since , what you subtract , sometimes you 'll be subtracting from some larger number and sometimes you wo n't . and professor b: so you can end up with some components in it that are affected by things that are seconds away . , and if it 's a low energy compo portion , you might actually hear some funny things . grad e: o o one thing , i noticed is that , the mean subtraction seems to make the pzm signals louder after they ' ve been re - synthesized . so i was wondering , is it possible that one reason it helped with the aurora baseline system is just as a gain control ? cuz some of the pzm signals sound pretty quiet if you do n't amplify them . professor b: i do n't think just multiplying the signal by two would have any effect . , if you really have louder signals , what you mean is that you have better signal - to - noise ratio . professor b: so if what you 're doing is improving the signal - to - noise ratio , then it would be better . but just it being bigger if with the same signal - to - noise ratio phd c: , the system is use the absolute energy , so it 's a little bit dependent on the signal level . but , not so much , i . professor b: so if the if you change in both training and test , the absolute level by a factor of two , it will n have no effect . phd a: did you add this data to the training set , for the aurora ? or you just tested on this ? phd a: , morgan was just saying that , as long as you do it in both training and testing , it should n't have any effect . grad e: right . i trained on clean ti - digits . i did the mean subtraction on clean ti - digits . but i did n't i ' m not if it made the clean ti ti - digits any louder . grad e: i . if it 's if it 's like , if it 's trying to find a reverberation filter , it could be that this reverberation filter is making things quieter . and then if you take it out that taking it out makes things louder . professor b: , no . , there 's nothing inherent about removing if you 're really removing , professor b: it might just be some artifact of the processing that , if you 're i . phd a: i wonder if there could be something like , for s for the pzm data , , if occasionally , somebody hits the table , you could get a spike . i ' m just wondering if there 's something about the , doing the mean normalization where , it could you to have better signal - to - noise ratio . professor b: , there is this . a minute . it it i maybe i if , subtracting the mean log spectrum is like dividing by the spectrum . 
so , depending what you divide by , if your if s your estimate is off and sometimes you 're getting a small number , you could make it bigger . so , it 's just a question of there 's it it could be that there 's some normalization that 's missing , to make it , y you 'd think it should n't be larger , but maybe in practice it is . that 's something to think about . phd c: i had a question about the system the sri system . so , you trained it on ti - digits ? but except this , it 's exactly the same system as the one that was tested before and that was trained on macrophone . right ? so on ti - digits it gives you one point two percent error rate and on macrophone it 's still o point eight . , but is it exactly the same system ? grad e: so . if you 're talking about the macrophone results that andreas had about , a week and a half ago , it 's the same system . phd c: so you use vtl - , vocal tract length normalization and , like mllr transformations also , and professor b: i ' m , was his point eight percent , er , a result on testing on macrophone or training ? professor b: so that was done already . so we were , and it 's point eight ? phd c: i ' ve just been text testing the new aurora front - end with , aurora system actually so front - end and htk , acoustic models on the meeting digits and it 's a little bit better than the previous system . we have i have two point seven percent error rate . and before with the system that was proposed , it 's what ? it was three point nine . so . phd c: . , . so , we have the new lda filters , maybe i did n't look , but one thing that makes a difference is this dc offset compensation . , do y did you have a look at the meet , meeting digits , if they have a dc component , or ? phd g: no . the dc component could be negligible . , if you are recording it through a mike . , any all of the mikes have the dc removal some capacitor sitting right in that bias it . professor b: but this , no . because , there 's a sample and hold in the a - tod and these period these typically do have a dc offset . phd g: , so it is the digital it 's the a - tod that introduces the dc in . professor b: but but , typi , unless actually , there are instrumentation mikes that do pass go down to dc . no , it 's the electronics . and they and then there 's amplification afterwards . and you can get , it was in the wall street journal data that i ca n't remember , one of the darpa things . there was this big dc - dc offset we did n't know about for a while , while we were messing with it . and we were getting these terrible results . and then we were talking to somebody and they said , " , . did n't ? everybody knows that . there 's all this dc offset in th " so , yes . you can have dc offset in the data . grad e: and i also , did some experiments about normalizing the phase . so i c i came up with a web page that people can take a look at . the interesting thing that i tried was , adam and morgan had this idea , since my original attempts to , take the mean of the phase spectra over time and normalize using that , by subtracting that off , did n't work . , so , that we thought that might be due to , problems with , the arithmetic of phases . they they add in this modulo two pi way there 's reason to believe that approach of taking the mean of the phase spectrum was n't really mathematically correct . 
so , what i did instead is i took the mean of the fft spectrum without taking the log or anything , and then i took the phase of that , and i subtracted that phase off to normalize . but that , did n't work either . professor b: see , we have a different interpretation of this . he says it does n't work . i said , it works magnificently , but just not for the task we intended . , it gets rid of the speech . professor b: , it leaves the junk . , it 's tremendous . you see , all he has to do is go back and reverse what he did before , and he 's really got something . professor b: ex - exactly . , you got it . so , it 's a general rule . professor b: just listen very carefully to what i say and do the opposite . including what said . phd c: . maybe , concerning these d still , these meeting digits . i ' m more interested in trying to figure out what 's still the difference between the sri system and the aurora system . so , i will maybe train , like , gender - dependent models , because this is also one big difference between the two systems . the other differences were the fact that maybe the acoustic models of the sri are more sri system are more complex . but , chuck , you did some experiments with this professor b: , it sounds like they also have he 's saying they have all these , different kinds of adaptation . , they have channel adaptation . they have speaker adaptation . phd a: like they do , i ' m not how they would do it when they 're working with the digits , phd a: but , like , in the switchboard data , there 's , conversation - side normalization for the non - c - zero components , phd c: . this is another difference . their normalization works like on the utterance levels . but we have to do it we have a system that does it on - line . phd c: so , it might be better with it might be worse if the channel is constant , or nnn . phd g: and the acoustic models are like - k triphone models or is it the whole word ? professor b: it 's probably more than that . , so they have i thin think they use these , genone things . so there 's these , pooled models and they can go out to all sorts of dependencies . professor b: they have tied states and i do n't real i ' m talk i ' m just guessing here . but i think they do n't just have triphones . they have a range of , dependencies . phd c: , the first thing i that i want to do is just maybe these gender things . and maybe see with andreas if , i how much it helps , what 's the model . phd a: so so the n on the numbers you got , the two point seven , is that using the same training data that the sri system used and got one point two ? grad e: for with the sri system , the aurora baseline is set up with these , this version of the clean training set that 's been filtered with this g - seven - one - two filter , to train the sri system on digits s - andreas used the original ti - digits , under u doctor - speech data ti - digits , which do n't have this filter . but i do n't think there 's any other difference . professor b: so is that ? , are these results comparable ? so you were getting with the , aurora baseline something like two point four percent on clean ti - digits , when , training the sri system with clean tr digits ti - digits . right ? professor b: and , so , is your two point seven comparable , where you 're , using , the submitted system ? professor b: you you were htk . that 's right . 
ok , so the comparable number then , for what you were talking about then , since it was htk , would be the , two point f grad e: do you mean the b ? the baseline aurora - two system , trained on ti - digits , tested on meeting recorder near , we saw in it today , and it was about six point six percent . phd c: they are helping . and another thing i maybe would like to do is to just test the sri system that 's trained on macrophone test it on , the noisy ti - digits , cuz i ' m still wondering where this improvement comes from . when you train on macrophone , it seems better on meeting digits . but i wonder if it 's just because maybe macrophone is acoustically closer to the meeting digits than ti - digit is , which is ti - digits are very clean recorded digits phd a: , it would also be interesting to see , to do the regular aurora test , phd c: that 's . that 's what i wanted , just , so , just using the sri system , test it on and test it on aurora ti - digits . phd a: , i the work would be into getting the files in the right formats , . right ? because when you train up the aurora system , you 're also training on all the data . , it 's professor b: that 's true , but that also when we ' ve had these meetings week after week , oftentimes people have not done the full arrange of things because on whatever it is they 're trying , because it 's a lot of work , even just with the htk . so , it 's a good idea , but it seems like it makes sense to do some pruning first with a test or two that makes sense for you , and then take the likely candidates and go further . phd c: but , just testing on ti - digits would already give us some information about what 's going on . , . ok . , the next thing is this vad problem that , so , i ' m just talking about the curves that i sent you so , whi that shows that when the snr decrease , the current vad approach does n't drop much frames for some particular noises , which might be then noises that are closer to speech , acoustically . professor b: i i just to clarify something for me . they were supp supposedly , in the next evaluation , they 're going to be supplying us with boundaries . so does any of this matter ? , other than our interest in it . phd c: first of all , the boundaries might be , like we would have t two hundred milliseconds or before and after speech . so removing more than that might still make a difference in the results . professor b: do we ? , is there some reason that we think that 's the case ? phd c: no . because we do n't did n't looked that much at that . but , still , it 's an interesting problem . professor b: but maybe we 'll get some insight on that when , the gang gets back from crete . because there 's lots of interesting problems , . and then if they really are going to have some means of giving us fairly tight , boundaries , then that wo n't be so much the issue . but i . phd g: because w we were wondering whether that vad is going to be , like , a realistic one or is it going to be some manual segmentation . and then , like , if that vad is going to be a realistic one , then we can actually use their markers to shift the point around , the way we want to find a , rather than keeping the twenty frames , we can actually move the marker to a point which we find more suitable for us . phd g: but if that is going to be something like a manual , segmenter , then we ca n't use that information anymore , because that 's not going to be the one that is used in the final evaluation . so . 
we what is the type of vad which they 're going to provide . phd c: and actually there 's there 's an , it 's still for even for the evaluation , it might still be interesting to work on this because the boundaries that they would provide is just , starting of speech and end of speech , at the utterance level . phd g: with some gap . , with some pauses in the center , provided they meet that whatever the hang - over time which they are talking . professor b: so if you could get at some of that , although that 'd be hard . phd c: it might be useful for , like , noise estimation , and a lot of other things that we want to work on . phd c: but so i did started to test putting together two vad which was not much work actually . i i m re - implemented a vad that 's very close to the , energy - based vad that , the other aurora guys use . so , which is just putting a threshold on the noise energy , and , detect detecting the first group of four frames that have a energy that 's above this threshold , and , from this point , tagging the frames there as speech . so it removes the first silent portion of each utterance . and it really removes it , still o on the noises where our mlp vad does n't work a lot . professor b: mmm . cuz i would have thought that having some spectral information , in the old days people would use energy and zero crossings , would give you some better performance . cuz you might have low - energy fricatives or , stop consonants , like that . professor b: , that if you d if you use purely energy and do n't look at anything spectral , then you do n't have a good way of distinguishing between low - energy speech components and nonspeech . and , just as a gross generalization , most nonsp many nonspeech noises have a low - pass characteristic , some slope . and and most , low - energy speech components that are unvoiced have a high - pass characteristic an upward slope . so having some a , at the beginning of a of an s sound , just starting in , it might be pretty low - energy , but it will tend to have this high - frequency component . whereas , a lot of rumble , and background noises , and will be predominantly low - frequency . , by itself it 's not enough to tell you , but it plus energy is it plus energy plus timing information is , if you look up in rabiner and schafer from like twenty - five years ago , that 's what they were using then . so it 's not a phd c: so , . it it might be that what i did is so , removes like low , low - energy , speech frames . because the way i do it is i just combine the two decisions so , the one from the mlp and the one from the energy - based with the and operator . so , i only keep the frames where the two agree that it 's speech . so if the energy - based dropped low - energy speech , mmm , they are lost . but s still , the way it 's done right now it helps on the noises where it seems to help on the noises where our vad was not very good . professor b: , i one could imagine combining them in different ways . i what you 're saying is that the mlp - based one has the spectral information . phd c: the way i use a an a " and " operator is so , it i , phd c: the frames that are dropped by the energy - based system are , dropped , even if the , mlp decides to keep them . professor b: but , i in principle what you 'd want to do is have a , a probability estimated by each one and put them together . phd a: something that i ' ve used in the past is , when just looking at the energy , is to look at the derivative . 
and you make your decision when the derivative is increasing for so many frames . then you say that 's beginning of speech . but , i ' m trying to remember if that requires that you keep some amount of speech in a buffer . i it depends on how you do it . but , that 's been a useful thing . phd g: . , every everywhere has a delay associated with it . , you still have to k always keep a buffer , then only make a decision because you still need to smooth the decision further . so that 's always there . phd c: , actually if i do n't maybe do n't want to work too much of on it right now . wanted to see if it 's what i observed was the re was caused by this vad problem . and it seems to be the case . , the second thing is the this spectral subtraction . which i ' ve just started yesterday to launch a bunch of , { nonvocalsound } twenty - five experiments , with different , values for the parameters that are used . it 's the makhoul - type spectral subtraction which use an over - estimation factor . so , we substr i subtract more , { nonvocalsound } noise than the noise spectra that is estimated on the noise portion of the s , the utterances . so i tried several , over - estimation factors . and after subtraction , i also add a constant noise , and i also try different , noise , values and we 'll see what happen . but st still when we look at the , it depends on the parameters that you use , but for moderate over - estimation factors and moderate noise level that you add , you st have a lot of musical noise . on the other hand , when you subtract more and when you add more noise , you get rid of this musical noise but maybe you distort a lot of speech . , it until now , it does n't seem to help . we 'll see . so the next thing , maybe i what i will try to do is just to try to smooth mmm , the , to smooth the d the result of the subtraction , to get rid of the musical noise , using some filter , phd g: can smooth the snr estimate , also . your filter is a function of snr . ? phd c: so , to get something that 's would be closer to what you tried to do with wiener filtering . phd g: so , u th i ' ve been playing with this wiener filter , like . and there are there were some bugs in the program , so i was p initially trying to clear them up . because one of the bug was i was assuming that always the vad , the initial frames were silence . it always started in the silence state , but it was n't for some utterances . so the it was n't estimating the noise initially , and then it never estimated , because i assumed that it was always silence . phd c: so this is on speechdat - car italian ? so , in some cases s there are also phd g: speechdat - car italian . there 're a few cases , actually , which i found later , that there are . phd g: so that was one of the bugs that was there in estimating the noise . and , so once it was cleared , i ran a few experiments with different ways of smoothing the estimated clean speech and how t estimated the noise and , smoothing the snr also . and so the trend seems to be like , smoothing the current estimate of the clean speech for deriving the snr , which is like deriving the wiener filter , seems to be helping . then updating it quite fast using a very small time constant . so we 'll have , like , a few results where the estimating the more smoothing is helping . but still it 's like it 's still comparable to the baseline . i have n't got anything beyond the baseline . but that 's , like , not using any wiener filter . 
and , so i ' m trying a few more experiments with different time constants for smoothing the noise spectrum , and smoothing the clean speech , and smoothing snr . so there are three time constants that i have . so , i ' m just playing around . so , one is fixed in the line , like smoothing the clean speech is helping , so i ' m not going to change it that much . but , the way i ' m estimating the noise and the way i ' m estimating the snr , i ' m just trying a little bit . so , that h and the other thing is , like , putting a floor on the , snr , because that if some in some cases the clean speech is , like when it 's estimated , it goes to very low values , so the snr is , like , very low . and so that actually creates a lot of variance in the low - energy region of the speech . so , i ' m thinking of , like , putting a floor also for the snr so that it does n't vary a lot in the low - energy regions . and , . so . the results are , like so far i ' ve been testing only with the baseline , which is which does n't have any lda filtering and on - line normalization . want to separate the contributions out . so it 's just vad , plus the wiener filter , plus the baseline system , which is , just the spectral , the mel sp mel , frequency coefficients . and the other thing that i tried was but took of those , carlos filters , which hynek had , to see whether it really h helps or not . , it was just a run to see whether it really degrades or it helps . it 's it seems to be like it 's not hurting a lot by just blindly picking up one filter which is nothing but a four hertz a band - pass m filter on the cubic root of the power spectrum . so , that was the filter that hy - , carlos had . so . just just to see whether it really it 's is it worth trying or not . so , it does n't seems to be degrading a lot on that . so there must be something that can be done with that type of noise compensation also , which i would ask carlos about that . , how he derived those filters and where d if he has any filters which are derived on ogi stories , added with some type of noise which what we are using currently , like that . so maybe i 'll professor b: so , if you have this band - pass filter , you probably get n you get negative values . phd g: and i ' m , like , floating it to z zeros right now . so it has , like the spectrogram has , like it actually , enhances the onset and offset of , the begin and the end of the speech . so it 's there seems to be , like , deep valleys in the begin and the end of , like , high - energy regions , because the filter has , like , a mexican - hat type structure . so , those are the regions where there are , like when i look at the spectrogram , there are those deep valleys on the begin and the end of the speech . but the rest of it seems to be , like , pretty . so . that 's something i observe using that filter . there are a few very not a lot of because the filter does n't have a really a deep negative portion , so that it 's not really creating a lot of negative values in the cubic root . i 'll s may continue with that for some w i 'll maybe i 'll ask carlos a little more about how to play with those filters , and but while making this wiener filter better . that that 's it , morgan . phd g: i would actually m did n't get enough time to work on the subspace last week . it was mostly about finding those bugs th , things , and i did n't work much on that . phd d: , i am still working with , vts . 
and , one of the things that last week , say here is that maybe the problem was with the diff because the signal have different level of energy . phd d: and , maybe , talking with stephane and with sunil , we decide that maybe it was interesting to apply on - line normalization before applying vts . but then we decided that 's it does n't work , because we modified also the noise . and , thinking about that , we then we decide that maybe is a good idea . we . i do n't hav i do n't this is i did n't do the experiment yet to apply vts in cepstral domain . professor b: the other thing is so so , in i not and c - zero would be a different so you could do a different normalization for c - zero than for other things anyway . , the other thing i was gon na suggest is that you could have two kinds of normalization with , different time constants . so , you could do some normalization s , before the vts , and then do some other normalization after . but but c - zero certainly acts differently than the others do , so that 's phd d: , we s decide to m to obtain the new expression if we work in the cepstral domain . i am working in that now , but i ' m not if that will be usefu useful . i . it 's k it 's k it 's quite a lot it 's a lot of work . , it 's not too much , but this it 's work . and i want to know if we have some feeling that the result i would like to know if i do n't have any feeling if this will work better than apply vts aft in cepstral domain will work better than apply in m mel in filter bank domain . i r i ' m not . i do n't i nothing . professor b: . , you 're the first one here to work with vts , so , maybe we could call someone else up who has , ask them their opinion . i do n't have a good feeling for it . phd c: actually , the vts that you tested before was in the log domain and so the codebook is e dependent on the level of the speech signal . phd c: so i expect it if if you have something that 's independent of this , i expect it to , be a better model of speech . professor b: you you would n't even need to switch to cepstra . , you can just normalize the professor b: and then you have one number which is very dependent on the level cuz it is the level , phd c: but here also we would have to be careful about removing the mean of speech not of noise . phd d: we i was thinking to estimate the noise with the first frames and then apply the vad , before the on - line normalization . we we see , i am thinking about that and working about that , but i do n't have result this week . professor b: , one of the things we ' ve talked about maybe it might be star time to start thinking about pretty soon , is as we look at the pros and cons of these different methods , how do they fit in with one another ? because we ' ve talked about potentially doing some combination of a couple of them . maybe maybe pretty soon we 'll have some sense of what their characteristics are , so we can see what should be combined . ###summary: a typical progress report meeting for the icsi meeting recorder group at berkeley. each of the group reported their most recent progress , and any results they have achieved. this then prompted discussion about the reasons behind such findings , which were for the most part not as expected. topics the group touched upon included spectral subtraction , phase normalization , voice activity detection , along with comparisons between systems. *na* a couple of issues have arisen that need to be looked into further such as dc-offset and effects of pzm signals. 
speaker fn002 is worried about running vts in the cepstral domain , because it requires a lot of work , and it is not clear that it will be much better than running it in the mel domain. similarly , since at the next stage of the project the data will have marked boundaries , it is not clear that voice-activity detection is worth pursuing. speaker me026 has been trying mean subtraction on the sri system , with good improvement for the far mike , though worse on the near mike. this contradicts previous findings for htk , though he has some theories to explain the difference. he has also been working on different phase normalization techniques , with no luck. speaker mn007 has also been looking at differences between the sri system and the group's aurora project , and again , there are a number of possible explanations. he has also been looking at a problem with the vad and snr. mn052 has sorted out bugs in the implementation of wiener filtering , and has been investigating smoothing and also snr. fn002 is ready to run an experiment investigating vts in the cepstral domain. speaker me006 is still working on his proposal.
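before the next transcript , a sketch of the smoothed-snr wiener gain discussed in the dialogue above . the real system has three separate time constants ( for the noise spectrum , the clean-speech estimate , and the snr ) ; this sketch smooths only the clean-speech estimate and floors the snr , and the constants are placeholders .

```python
import numpy as np

def wiener_gains(power_spec, noise_est, rho=0.98, snr_floor=0.1):
    """Recursively smooth the instantaneous clean-speech estimate,
    derive the snr from it, floor the snr so the gain does not
    collapse (and add variance) in low-energy regions, and return
    the classical Wiener gain snr / (1 + snr) for each frame."""
    smoothed = np.zeros_like(noise_est)
    gains = []
    for frame in power_spec:  # power_spec: frames x frequency bins
        clean = np.maximum(frame - noise_est, 0.0)
        smoothed = rho * smoothed + (1.0 - rho) * clean
        snr = np.maximum(smoothed / np.maximum(noise_est, 1e-10), snr_floor)
        gains.append(snr / (1.0 + snr))
    return np.array(gains)
```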
professor b: ok , so i i whether ami 's coming or not but we oughta just get started . professor b: , so there you go . anyway , so my idea f for today and we can decide that is n't the right thing to do was to at spend at least part of the time trying to build the influence links , which sets of things are relevant to which decisions and actually i had specific s suggestion to start first with the path ones . the database ones being in some sense less interesting to us although probably have to be done and so to do that so there 's and the idea was we were gon na do two things professor b: right , . we were gon na do two things one of which is just lay out the influence structure of what we think influences what professor b: and then as a separate but related task particularly bhaskara and i were going to try to decide what kinds of belief nodes are needed in order to do what we need to do . once so but du we should have all of the basic design of what influences what done before we decide exactly how to compute it . so i did n't did you get a chance to look yet ? professor b: great . ok so let 's start with the belief - nets , the general influence and then we 'll also at some point break and talk about the techy . grad e: one could go there 's we can di discuss everything . first of all this i added , i knew from this has to be there right ? grad e: given given not transverse the castle , the decision is does the person want to go there or is it just professor b: right , true . does have to be there . and i ' m we 'll find more as we go that grad e: and ? so go - there in the first place or not is definitely one of the basic ones . we can start with that . interesting effect . is this true or false or maybe we 'll get professor b: when we 're when we 're done . so so the reason it might not be true or false is that we did have this idea of when so it 's , current @ and so on or not , professor b: right ? and so that a decision would be do we want that so you could two different things you could do , you could have all those values for go - there or you could have go - there be binary and given that you 're going there when . grad a: it seems that you could it seems that those things would be logically independent like you would wanna have them separate or binary , go - there and then the possibilities of how to go there because grad a: because , it might be easy to figure out that this person is going to need more film eventually from their utterance but it 's much more complex to query when would be the most appropriate time . grad e: and so i ' ve tried to come up with some initial things one could observe so who is the user ? everything that has user comes from the user model everything that has situation comes from the situation model - a . we should be clear . but when it comes to writing down when you do these things is it here ? you have to a write the values this can take . grad e: and here i was really in some s sometimes i was really standing in front of a wall feeling very stupid because this case it 's pretty simple , but as we will see the other ones if it 's a running budget so what are the discrete values of a running budget ? so maybe my understanding there is too impoverished . grad e: how can i write here that this is something , a number that cr keeps on changing ? but ok . thus is understandable ? professor b: you ' ve s have you seen this before keith , these belief - net things ? 
grad e: so here is the we had that the user 's budget may influence the outcome of decisions . there we wanted to keep a running total of things . grad d: is this like a number that represents how much money they have left to spend ? ok , h how is it different from user finance ? grad e: the finance is here thought of as the financial policy a person carries out in his life , he is he cheap , average , or spendy ? grad e: and i did n't come maybe a user i , i did n't want to write greediness , but professor b: . so keith w what 's behind this is actually a program that will once you fill all this in actually s solve your belief - nets for you and . professor b: so this is not just a display , this is actually a gui to a simulator that will if we tell it all the right things we 'll wind up with a functioning belief - net at the other end . grad e: ok , so here was ok , think of people being cheap , average , or spendy or we can even have a finer scale moderately cheap , grad e: does n't matter . agree there but here i was n't what to write in . grad d: , you ' ve written in what seems to be required like what else is do you want ? professor b: . so here 's what 's permissible is that you can arrange so that the value of that is gon na have to be updated and n it 's not a belief update , it 's you took some actions , you spent money and , so the update of that is gon na have to be essentially external to the belief - net . right ? and then what you 're going to need is for the things that it influences . let 's first of all let 's see if it does influence anything . and if it does influence anything then you 're gon na need something that converts from the number here to something that 's relevant to the decision there . so it could be ra they create different ranges that are relevant for different decisions or whatever but for the moment this is just a node that is conditioned externally and might influence various things . grad d: the other thing is that every time that 's updated beliefs will have to be propagated but then the question is do you do we wanna propagate beliefs every single time it 's updated or only when we need to ? professor b: , that 's a good question . and does it have a lazy mode ? i do n't remember . grad d: , in srini 's thing there was this option like proper inferences which suggests that does n't happen , automatically . professor b: right . s probably does . someone has to track that down , but i but and and actually professor b: one of the we w items for the user home base should be essentially non - local . i they 're only there for the day and they do n't have a place that they 're staying . grad e: just accidentally erased this , had values here such as is he s we had in our list we had " is he staying in our hotel ? " , is he staying with friends ? , and so we 're professor b: so it 's clear where w where we are right now . so my suggestion is we just pick professor b: one , one particular one of the let 's do the first one let 's do the one that we already think we did so w that was the of the endpoint ? professor b: no , he has he has n't filled them in yet , is what 's true . grad e: did i or did n't i ? . probably nothing done yet , did it on the upper ones , ok . makes sense . ok , so this was eva . maybe we can think of more things , cross professor b: , would be a f for a given segment . 
, you y you go first go the town square grad a: no , if you go to re if you go to prague or whatever one of your key points that you have to do is cross the charles bridge and does n't really matter which way you cross which where you end up at the end but the part the good part is walking over it , so . professor b: that 's subtle , but true . so let 's just leave it three with three for now professor b: and let 's see if we can get it linked up just to get ourselves started . professor b: you 'll see it you 'll see something comes up immediately , that the reason i wanna do this . grad e: w the user was definitely more likely to enter if he 's a local more likely to view if he 's a tourist and then we had the fact that given the fact that he 's thrifty and there will be admission then we get all these cross professor b: we did , but the three things w that it contributed to this , the other two are n't up there . so one was the ontology professor b: ok , so this is w right , so what w i what we seem to need here , this is why it starts getting into the technical professor b: the way we had been designing this , there were three intermediate nodes which were the endpoint decision as seen from the user model as seen from the ontology and as seen from the discourse . so each of those the way we had it designed , now we can change the design , but the design we had was there was a decision with the same three outcomes based on the th those three separate considerations so if we wanted to do that would have to put in three intermediate nodes professor b: and then what you and i have to talk about is , ok if we 're doing that and they get combined somehow how do they get combined ? but the they 're undoubtedly gon na be more things to worry about . grad e: so that 's w in our in johno 's pictogram everything that could contribute to whether a person wants to enter , view , or approach something . professor b: , it was called mode , so this is m mode here means the same as endpoint . professor b: alright . , but that was actually , unfortunately that was a an intermediate versio that 's i do n't think what we would currently do . grad a: can i ask about " slurred " and " angry " as inputs to this ? grad c: if the if the person talking is angry or slurs their speech they might be tired or , professor b: but that 's - that seems to , so so my advice to do is get this down to what we think is actually likely to be a strong influence . but , that was what he had in mind . professor b: so let 's think about this question of how do we wanna handle so there 're two separate things . one is at least two . one is how do we want to handle the notion of the ontology now what we talked about , and this is another technical thing bhaskara , is can we arrange so that we can so that the belief - net itself has properties and the properties are filled in from on ontology items . so the let 's take the case of the this endpoint thing , the notion was that if you had a few key properties like is this a tourist site , some landmark is it a place of business is it something you physically could enter ok , et cetera . so that there 'd be certain properties that would fit into the decision node and then again as part of the ou outer controlling conditioning of this thing those would be set , so that some somehow someone would find this word , look it up in the ontology , pull out these properties , put it into the belief - net , and then the decision would flow . 
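a hypothetical sketch of the ontology-to-belief-net glue just described : look the mentioned entity up , pull out the few key properties , and feed them to the endpoint decision node . the property names and the dictionary interface are assumptions , not the actual ontology .

```python
def endpoint_evidence(entity, ontology):
    """Look up an entity and extract just the properties the endpoint
    node conditions on; everything else in the ontology entry is
    ignored for this decision."""
    props = ontology.get(entity, {})
    return {key: props.get(key, False)
            for key in ("tourist_site", "landmark", "enterable", "commercial")}
```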
now grad e: seems to me that we ' ve e embedded a lot , embedded a lot of these things we had in there previously in some of the other final decisions done here , if we would know that this thing is exhibiting something if it 's exhibiting itself it is a landmark , meaning more likely to be viewed if it is exhibiting pictures or sculptures and like this , then it 's more likely to be entered . professor b: i that 's completely right and that 's good , so what that says is that we might be able to take and in particular so the ones we talked about were exhibiting and selling grad e: if it 's closed one probably wo n't enter . or if it 's not accessible to a tourist ever the likelihood of that person actually wanting to enter it , given that he knows it , . professor b: so let me suggest this . w could you move those up about halfway . the ones that you th and selling i . professor b: so here 's what it looks like to me . is that you want an intermediate structure which i is essentially the or of for this purpose of selling , f fixing , or servicing . so that it that is , for certain purposes , it becomes important but for this purpose one of these places is quite like the other . does that seem right ? so we di professor b: if we yes . so if it may be more than endpoint decisions , so the idea would be that you might wanna merge those three professor b: it i here 's where it gets a little tricky . from the belief - net point of view it is from another point of view it 's interest it 's important to it 's selling or servicing and . so for this decision it 's just true or false and in th this is a case where the or seems just what you want . that that if any of those things is true then it 's the place that you professor b: you could , . , so let 's do that . no no , no to an inter no , an intermediate node . grad e: so are they the is it the object that sells , fixes , or services things ? professor b: say w it 's co i would s a again for this purpose it 's commercial . someplace you want to go in to do some business . grad d: what does the underscore - t at the end of each of those things signify ? grad e: things . so places that service things sell things or fix things and pe places that e exhibit things . grad a: so we 're deriving this the this feature of whether the main action at this place happens inside or outside or what we 're deriving that from what activity is done there ? could n't you have it as just a primitive feature of the entity ? grad a: it seems like that 's much more reliable cuz you could have outdoor places that sell things and indoor places that do something else professor b: , the problem with it is that it putting in a feature just for one decision , professor b: now w we may wind up having to do that this i anyway , this i at a mental level that 's what we 're gon na have to sort out . so , what does this look like , what are intermediate things that are worth computing , what are the features we need in order to make all these decisions and what 's the best way to organize this so that it 's clean and consistent and all that . grad a: i ' m just thinking about how people , human beings who know about places and places to go and so on would store this and it would probably you would n't just remember that they sell and then deduce from that it must be going on inside . 
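the intermediate "or" node just proposed can be written down directly ; a sketch , with illustrative names :

```python
def commercial(sells, fixes, services):
    """Deterministic intermediate node: true if the place is somewhere
    you would go in to do business. Collapsing the three binary
    parents into one boolean means the endpoint decision conditions
    on 2 cases here instead of 2**3 = 8 parent combinations."""
    return sells or fixes or services
```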
grad e: an entity maybe should be regard as a vector of several possible things , it can either do s do sell things , fix things , service things , exhibit things , it can be a landmark at the same time as doing these things , it 's not either or mmm certainly a place can be a hotel and a famous site . many come to mind . things can be generally a landmark and be accessible . ie a castle or can be a landmark a or not accessible , some statue can go inside . professor b: anyway so let me suggest you do something else . which is to get rid of that l long link between who the user and the endpoint . professor b: no no , i do n't want the link there . because what we 're gon na want is an intermediate thing which is the endpoint decisi the endpoint decision based o on the user models , so what we talked about is three separate endpoint decisions , so let 's make a new node grad c: just as a suggestion maybe you could " save as " to keep your old one and clean and so you can mess with this one . professor b: let 's say underbar - u , so that 's the endpoint decision as seen through the professor b: underscore - e for entity , and we may change all this , but . and grad e: ok , should n't i be able to move them all ? no . or ? can i ? where ? what ? professor b: i d i . actually , i the easiest thing would move mo move the endpoint , go ahead . just do whatever . professor b: and maybe th maybe it 's just one who is the user , i , maybe there 's more . grad e: if he 's usi if he 's in a car right now what was that people with harry drove the car into the cafe professor b: never mind . anyway , this is crude . now but the now so but then the question is so and we assume that some of these properties would come indirectly through an ontology , but then we had this third idea of input from the discourse . professor b: , maybe , i again , i d , ok , put in but what we 're gon na wanna do is actually grad e: here this was one of my problems we have the user interest is a vector of five hundred values , so that 's from the user model , grad d: right . so why is it , so it 's like a vector of five hundred one 's or zero 's ? professor b: so you cou and so here let me give you two ways to handle that . alright ? one is you could ignore it . but the other thing you could do is have an and this will give you the flavor of the of what you could have a node that 's that was a measure of the match between the object 's feature , the match between the object the entity , i ' m and the user . so you could have a k a " fit " node that would have to be computed by someone else but so that grad a: user schedule . do i have time to go in and climb all the way to the top of the koelner dome or do have to " time to take a picture of the outside ? 
" professor b: that 's what we do n't wanna do , see that se cuz then we get into huge combinatorics and like that an grad c: cuz if the , and if the user is tired , the user state , right , it would affect , but i ca n't see why e anything w everything in the model would n't be professor b: , but , that 's we ca n't do that , so we 're gon na have to but this is a good discussion , we 're gon na have to somehow figure out some way to encapsulate that so if there 's some general notion of the relation to the time to do this to the amount of time the guy has like that is the compatibility with his current state , so that 's what you 'd have to do , you 'd have to get it down to something which was itself relatively compact , so it could be compatibility with his current state which would include his money and his time and his energy grad d: no but , it 's more than that , like the more you break it up like because if you have everything pointing to one node it 's like exponential whereas if you like keep breaking it up more and more it 's not exponential anymore . professor b: so it , there are two advantages . that 's tha there 's one technical one grad c: s so we 'd be doing subgrouping ? subgrouping , into mo so make it more tree like going backwards ? professor b: but it there 's two advantages , one is the technical one that you do n't wind up with such big exponential cbt 's , professor b: the other is it can be it presumably can be used for multiple decisions . so that if you have this idea of the compatibility with the requirements of an action to the state of the user one could imagine that was u not only is it sim is it cleaner to compute it separately but it could be that it 's used in multiple places . anyway th so in general this is the design , this is really design problem . ok , you ' ve got a signal , a d set of decisions how do we do this ? grad e: what do i have under user state anyhow cuz i named that already something . that 's tired , fresh , maybe should be renamed into physical state . grad c: i the question is it 's hard for me to imagine how everything would n't just contribute to user state again . or user compatibility . grad e: the user interests and the user who the user is are completely apart from the fact whether he is tired broke grad c: , but other though the node we 're creating right now is user compatibility to the current action , right ? grad c: seems like everything in the user model would contribute to whether or not the user was compatible with something . professor b: maybe not . the that 's the issue is would even if it was true in some abstract general sense it might not be true in terms of the information we actually had and can make use of . and anyway we 're gon na have to find some way to cl get this sufficiently simple to make it feasible . grad e: maybe if we look at the if we split it up again into if we look at the endpoint again we said that for each of these things there are certain preconditions so you can only enter a place if you are not too tired to do so and also have the money to do so if it costs something so if you can afford it and perform it is preconditions . viewing usually is cheap or free . is that always true ? i . professor b: w w but that viewing it without ent view w with our definition of view it 's free cuz you grad a: what about the grand canyon , right ? no , never mind . 
are there large things that you would have to pay to get up close to like , never mind , not in the current professor b: no we have to enter the park . almost by definition paying involves entering , ge going through some professor b: so let me suggest we switch to another one , clearly there 's more work to be done on this but it 's gon na be more instructive to think about other decisions that we need to make in path land . and what they 're gon na look like . grad c: so you can save this one as and open up the old one , right and then everything would be clean . you could do it again . professor b: why , it 's worth saving this one but i 'd like to keep this one cuz i wanna see if we 're gon na reuse any of this . professor b: you tell me , so in terms of the planner what 's a good one to do ? grad e: let 's th this go there or not is a good one . is a very basic one . so what makes things more likely that professor b: the fir see the first thing is , getting back to thing we left out of the other is the actual discourse . so keith this is gon na get into your world because we 're gon na want to know , which constructions indicate various of these properties s and so i do n't yet know how to do this , i we 're gon na wind up pulling out discourse properties like we have object properties and we what they are yet . so that the go - there decision will have a node from discourse , and i why do n't we just stick a discourse thing up there to be as a placeholder for grad e: identified that and so again re that 's completely correct , we have the user model , the situation model here , we do n't have the discourse model here yet . much the same way as we did n't we do n't have the ontology here . professor b: the ontology we said we would pull these various kinds of properties from the ontology like exhibiting , selling , and . professor b: so in some sense it 's there . but the discourse we do n't have it represented yet . grad e: this be specific for second year ? and and we probably will have something like a discourse for endpoint . professor b: but if we do it 'll have the three values . it 'll have the eva values if we have it . professor b: for go - there , probably is true and false , let 's say . that 's what we talked about . grad e: , we 're looking at the little data that we have , so people say how do i get to the castle and this usually means they wanna go there . so this should push it in one direction however people also sometimes say how do i get there in order to find out how to get there without wanting to go there . grad e: and sometimes people say where is it because they wanna know where it is but in most cases they probably professor b: , but that does n't change the fact that you 're you want these two values . grad e: true . so this is some external thing that takes all the discourse and then says here it 's either , yay , a , or nay . ok ? professor b: and they 'll be a y , a user go - there and maybe that 's all , i . grad d: that definitely interes but that now that what 's the word the that interacts with the eva thing if they just wanna view it then it 's fine to go there when it 's closed whereas if they want to so professor b: right , so that 's where it starts getting to be essentially more interesting , so what bhaskara says which is completely right is if that they 're only going to view it then it does n't matter whether it 's closed or not in terms of , whether you wanna go there . 
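one way to read professor b's suggestion of a compact compatibility node , sketched with hypothetical user-state and action fields :

```python
def compatibility(user_state, action):
    """Reduce the full user state (money, time, energy, ...) to one
    small discrete 'fit' value that several decision nodes can share,
    instead of wiring every user variable into every decision."""
    checks = [
        user_state["money"] >= action["cost"],
        user_state["time"] >= action["duration"],
        user_state["energy"] >= action["effort"],
    ]
    return sum(checks)  # 0..3, a coarse compatibility score
```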
grad d: , that 's what i said just having one situational node may not be enough because this that node by itself would n't distinguish professor b: i it can have di various values . , but we you 're right it might not be enough . grad d: , see i ' m thinking that any node that begins with " go - there " is either gon na be true or false . grad a: also , that node , the go - there s s node would just be fed by separate ones for , there 's different things , the strikes and the professor b: . so so now the other thing that bhaskara pointed out is what this says is that there sh should be a link , and this is where things are gon na get very messy from the endpoint decision professor b: maybe the t they 're final re and , i the very bottom endpoint decision to the go - there node . and i do n't worry about layout , then we 'll go nuts grad d: mmm . maybe we could have intermediate node that just the endpoint and the go - there s node fed into ? professor b: the go - there , actually the endpoint node could feed into the go - there s that 's right , professor b: so the endpoint node , make that up t to the go - there then we 'll have to do layout at some point , but something like that . now it 's gon na be important not to have loops . really important in the belief worl net world not to have loops professor b: no it 's much worse than that . it if i loo it 's not def i it 's not defined if you 're there are loops , professor b: you just you have to there are all sorts of ways of breaking it up so that there is n't grad e: but this is n't , this is this line is just coming from over here . professor b: , no it 's not a loop yet , i ' m just saying we , in no , in grad d: , but the good thing is we could have loopy belief propagation which we all love . professor b: ok , so anyway , so that 's another decision . what 's another decision you like ? grad e: ok , these have no parents yet , but i that does n't matter . right ? professor b: , the idea is that you go there , you go comes from something about the user from something about the situation and the discourse is a mystery . grad e: this is this comes from traffic and , . sh - should we just make some grad e: if there 's parking maybe mmm who cares . and if he has seen it already or not and , and discourse is something that should we make a keith note here ? that comes from keith . just so we do n't forget . have to get used to this . ok , whoops . professor b: and then also the discourse endpoint , i endpoint sub - d is if you wanna make it consistent . grad a: actually is this the right way to have it where go there from the user and go there from the situation just about each other but they both feed the go there decision because is n't the , grad a: , but that still allows for the possibility of the user model affecting our decision about whether a strike is the thing which is going to keep this user away from grad a: if you needed it to do that . but ok i was just thinking i maybe i ' m conflating that user node with possible asking of the user hey there 's a strike on , does that affect whether or not you wanna go grad a: or , so that might not come out of a user model but , directly out of interaction . professor b: i gu yes my curr , do n't that 's enough . my current idea on that would be that each of these decision nodes has questions associated with it . and the question would n't itself be one of these conditional things , given that there 's a strike do you still wanna go ? 
but if you told him a bunch of , then you would ask him do you wanna go ? but trying to formulate the conditional question , that sounds too much . professor b: because i want to do a little bit of organization . before we get more into details . the organization is going to be that the flavor of what 's going on is going to be that as we s e going to this detail keith is going to worry about the various constructions that people might use and johno has committed himself to being the parser wizard , professor b: so what 's going to happen is that eventually like by the time he graduates , ok they 'll be some system which is able to take the discourse in context and have outputs that can feed the rest of belief - net . i j wa i assume everybody knows that , wanna , get closure that 'll be the game then , so the semantics that you 'll get out of the discourse will be of values that go into the various discourse - based decision nodes . and now some of those will get fancier like mode of transportation and so it is n't by any means necessarily a simple thing that you want out . so if there is an and there is mode of transportation grad e: and it there 's a also a split if you loo if you blow this up and look at it in more detail there 's something that comes from the discourse in terms of what was actually just said what 's the utterance go giving us and then what 's the discourse history give us . professor b: , that , we 'll have to decide how much of th where that goes . professor b: an and it 's not clear yet . it could be those are two separate things , it could be that the discourse gadget itself integrates as which would be my that you 'd have to do see in order to do reference and like that you ' ve got ta have both the current discourse and the context to say i wanna go back there , what does that mean grad e: but is th is this picture that 's emerging here just my wish that you have noticed already for symmetry or is it that we get for each decision on the very bottom we get the sub - e , sub - d , sub - u and maybe a sub - o " for " ontology " meta node but it might just professor b: which is s if that 's true how do we wanna combine those ? o or when it 's true ? grad e: but this w wou would be though that , we only have at most four at the moment arrows going f to each of the bottom decisions . and four you we can handle . professor b: i it see i if it 's fou if it 's four things and each of them has four values it turns out to be a big cpt , it 's not s completely impossi it 's not beyond what the system could solve but it 's probably beyond what we could actually write down . or learn . professor b: it 's and i do n't think it 's gon na g e i do n't think it 'll get worse than that , so le that 's a good grad e: but but four did n't we decide that all of these had true or false ? so is it 's four professor b: for go there , but not f but not for the other one 's three values for endpoint already . grad d: , you need actually three to the five because if it has four inputs and then it itself has three values it can get big fast . professor b: each so you 're from each point of view you 're making the same decision . so from the point of view of the ob of the entity grad d: this and also , the other places where , like consider endpoint view , it has inputs coming from user budget , user thrift so even professor b: those are not necessarily binary . s so we 're gon na have to use some t care in the knowledge engineering to not have this explode . 
and it does n't in the sense that read it , actually with the underlying semantics and it is n't like you have two hundred and fifty - six different ways of thinking about whether this user wants to go to some place . so we just have to figure out what the regularities are and code them . but what i was gon na suggest next is maybe we wanna work on this a little longer but i do want to also talk about the thing that we started into now of it 's all fine to say all these arrows come into the si same place what rule of combination is used there . so th yes they so these things all affect it , how do they affect it ? and belief - nets have their own beliefs about what are good ways to do that . so is it 's clearer n clear enough what the issue is , so do we wanna switch that now or we wanna do some more of this ? grad e: r w we just need to in order to get some closure on this figure out how we 're gon na get this picture completely messy . professor b: , here he here 's one of the things that i th you sh you no , i how easy it is to do this in the interface but you it would be great if you could actually just display at a given time all the things that you pick up , you click on " endpoint " , and everything else fades and you just see the links that are relevant to that . and i does anybody remember the gui on this ? grad c: d i would almost say the other way to do that would be to open u or make n - many belief - nets and then open them every time you wanted to look at a different one vers cuz grad e: i have each of these thing each of the end belief - nets be a page and then you click on the thing and then li consider that it 's respective , professor b: the b anyway so it clear that even with this if we put in all the arrows nobody is gon na be able to read the diagram . alright , so e we have to figure out some display hack to do this because anyway i let me consi suggest that 's a s not a first - order consideration , we have two first - order considerations which is what are the influences a , and b how do they get combined mathematically , how do we display them is an issue , grad c: i do n't , do n't think this has been designed to support something like that . grad d: , it might soon , if this is gon na be used in a serious way like java base then it might soon be necessary to start modifying it for our purposes . professor b: , and i that seems like a perfectly feasible thing to get into , but we have to we want first . ok , so why do n't you tell us a little bit about decision nodes and what the choices might be for these ? grad d: i this board works fine . so recall the basic problem which is that you have a belief - net and you have like a lot of different nodes all contributing to one node . right ? so as we discussed specifying this thing is a big pain and it 's so will take a long time to write down because if these s have three possibilities each and this has three possibilities then you have two hundred and forty - three possibilities which is already a lot of numbers to write down . so what helps us in our situation is that these all have values in the same set , right ? these are all like saying ev or a , so it 's not just a generalized situation like we wanna just take a combination of we wanna view each of these as experts ea who are each of them is making a decision based on some factors and we wanna combine their decisions and create , sorta weighted combination . 
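the arithmetic behind grad d's "two hundred and forty-three" : with $n$ parents of $v$ values each and a child that also takes $v$ values , the conditional probability table needs

$$ v^{n} \times v = v^{n+1} \quad \text{entries;}\qquad n = 4,\ v = 3:\ 3^{5} = 243 . $$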
grad d: so the problem is to specify the so the conditional property of this given all those , that 's the way belief - nets are defined , like each node given its parents , so that 's what we want , we want p of let 's call this guy y and let 's call these x - one , x - two xn , so we want probability that y equals , e given that these guys are i 'll just refer to this as like x hat , the co like all of them ? given that the data says , a , v , a , e , so we would like to do this combination . professor b: alright , so is that i , wanna make everybody is with us before he goes on . grad d: so , right . so what we do n't wanna do is to for every single combination of e and v and a and every single letter e , s give a number because that 's not desirable . what we wanna do is find some principled way of saying what each of these is and we want it to be a valid probability distribution , so we want it to add up to one , so those are the two things that we need . so what i , what jerry suggested earlier was that we , view these guys as voting and we just take the we essentially take averages , so here two people have voted for a , one has voted for v , and one has voted for e , so we could say that the probabilities are , probability of being e is one over four , because one person voted for e out of four and similarly , probability of so this is probability of e s and then probability of a given all that is two out of four and probability of v is one out of four . so that 's step that 's the that 's the basic thing . now is that all ok ? grad e: it 's x - one voted for a x - two voted for v and ? professor b: y right . s so this assumes symmetry and equal weights and all this things , which may or may not be a good assumption , grad d: so step two is so we ' ve assumed equal weights whereas it might turn out that , some w be that , what the actual the verbal content of what the person said , like what might be somehow more important than the grad d: , so we do n't wanna like give them all equal weight so currently we ' ve been giving them all weight one fourth so we could replace this by w - one , w - two , w - three , and w - four and in order for this to be a valid probability distribution for each x - hat , we just need that the w 's sum to one . so they can be , you could have point one , point three , point two , and point four , say . grad c: so i jus just to make i understand this , so in this case we would still compute the average ? grad c: ok , so it 'd be so in this case the probability that y equals a would be w one times grad d: so these numbers have been replaced with point one , point three , point two , and point four . so you can view these as gone . probability of so , alright . so this is step two . so the next possibility is that we ' ve given just a single weight to each expert , right , whereas it might be the case that in certain situations one of the experts is more reliable and in certain situations the other expert is more reliable . so the way this is handled is by what 's called a mixture of experts , so what you can have is you augment these diagrams like this you have a new thing called " h " , ok ? this is a hidden variable . and what this is it gets its input from x - one , x - two , x - three , and x - four , and what it does is it decides which of the experts is to be trusted in this particular situation . and then these guys all come here . so this is sightly more complicated . 
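the two combination steps just described , as a small sketch : a linear opinion pool over the experts' votes , where equal weights recover plain vote counting .

```python
from collections import defaultdict

def combine(votes, weights):
    """Weighted vote combination: P(y = value) is the total weight of
    the experts voting for that value; weights must sum to 1."""
    dist = defaultdict(float)
    for vote, w in zip(votes, weights):
        dist[vote] += w
    return dict(dist)

# step one, equal weights: {'a': 0.5, 'v': 0.25, 'e': 0.25}
print(combine(['a', 'v', 'a', 'e'], [0.25] * 4))
# step two, fixed unequal weights: a gets 0.1 + 0.2, v gets 0.3, e gets 0.4
print(combine(['a', 'v', 'a', 'e'], [0.1, 0.3, 0.2, 0.4]))
```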
so what 's going on is that this h node looks at these four values of those guys and it decides in given these values which of these is n't likely to be more reliable or most reliable . so h produces some , it produces a number , either one , two , three , or four , in our situation , now this guy he looks at the value of h say it 's two , and then he just selects the thing . that 's all there is to say , i about it . right , so you can have a mixture that grad a: so so the function of the thing that comes out of h is very different from the function of the other inputs . it 's driving how the other four are interpreted . grad c: it 's to tell the bottom node which one of the situations that it 's in or which one of the weighting systems grad c: w i was just , if you wanted to pay attention to more than one you could pass a w a weighting s system though too , could n't you ? grad a: does h have to have another input to tell it alpha , beta , whatever , or is the that 's determined by what the experts are saying , like the type of situ it it just seems that like without that outside input that you ' ve got a situation where , like if x - one says no , a low value coming out of x - on or i if x - one says no then ignore x - one , that seems like that 'd be weird , grad d: , could be things like if x - two and x - three say yes then i ignore x - one also . grad c: the situations that h has , are they built into the net or ok , so they could either be hand coded or learned or based on training data , so you specify one of these things for every one of those possi possible situations . grad d: , to learn them we need data , where are we gon na get data ? we need data with people intentions , which is slightly tricky . but what 's the data about like , are we able to get these nodes from the data ? grad a: like how thrifty the user is , or do we have access to that ? right . good . grad d: , but that 's my question , like how do we , how do we have data about something like endpoint sub - e , or endpoint sub s s ? grad c: , you would say , based on in this dialogue that we have which one of the things that they said whether it was the entity relations or whatever was the thing that determined what mode it was , grad d: so this is what we wanna learn . i do n't think , you have a can you bring up the function thing ? w where is the thing that allows you to grad d: function properties , is that it ? , i not . , that 's and it so e either it 'll allow us to do everything which is unlikely , more likely it 'll allow us to do very few of these things and in that case we 'll have to just write up little things that allow you to create such cpu 's on your own in the java base format . , i was assuming that 's what we 'd always do because i was assuming that 's what we 'd always do , it 's grad c: in terms of java base it 's what you see is what you get in i do n't i would be surprised if it supports anything more than what we have right here . grad a: just talking about that general end of things is there gon na be data soon from what people say when they 're interacting with the system and so on ? like , what questions are being given being asked ? cuz fey , you mean . o ok . 
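and a toy version of the mixture-of-experts gate : a hidden selector h looks at all the votes and decides which expert to trust in this situation . the gating rule here is invented purely for illustration .

```python
def gated_combine(votes, gate):
    """Mixture-of-experts flavor: h = gate(votes) picks the expert
    index to trust, and that expert's vote is the answer."""
    return votes[gate(votes)]

# invented rule of the 'if x2 and x3 agree, ignore x1' sort:
def gate(votes):
    return 1 if votes[1] == votes[2] != votes[0] else 0

print(gated_combine(['a', 'v', 'v', 'e'], gate))  # -> 'v'
```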
i ' m just wondering , because in terms of , w the figure i was thinking about this figure that we talked about , fifty constructions or whatever that 's a whole lot of constructions and , one might be f fairly pleased with getting a really good analysis of five maybe ten in a summer so , i know we 're going for a rough and ready . , i was talking about the , if you wanted to do it really in detail and we do n't really need all the detail for what we 're doing right now but anyway in terms of just narrowing that task which fifty do i do , i wanna see what people are using , so , it will inspire me . , . touche . good enough .
a detailed diagram of the belief-net had already been disseminated. its structure was discussed during the meeting. there are several endpoints ( user , ontology , discourse etc ) with separate eva ( enter/view/approach ) values. details of how different inputs feed into them were discussed at length. ideas mentioned included grouping features of buildings like "selling" , "fixing" and "exhibiting" , as well as creating a user-compatibility node that would take different values depending on the situation and the user's status. similarly , a go-there ( towards a building ) node can be influenced by the user's budget and discourse parameters , amongst other things. the latter are still ill-defined at this stage. the study of the linguistic constructions that people use in this kind of navigational domain is expected to prove useful in that respect. as each node in the tree is the decision point of the combination of its parent nodes , which rules govern this combination is an important issue. there are several approaches ranging from simply averaging the inputs to using a hidden variable in order to weight them differently depending on context. if the latter architecture is used , the net could -to an extent- be trained with the data that is currently being collected. although this was mainly a brainstorming meeting , some minor tasks were allocated for the near future. since the net architecture and possible decision algorithms were discussed , it is necessary to examine how much of this javabayes can accommodate and , if not , what modifications would be necessary. additionally , the german partners visiting the institute will need to see some results of the new system design. finally , the analysis of the linguistic constructions for the current research domain can begin even with limited data , as , at this stage , it need not be very detailed. this is only a diagrammatic view of what the decision tree for the eva task looks like. a lot of the details have been glossed over: the user model can potentially comprise a huge number of factors; a planning "go-there" node needs input from several other areas of the net; there are intricate interactions between discourse and the situation model. similarly , what discourse properties are of importance and how they influence eva probabilities is still a mystery. on a more general note , there is also the question of whether the net should be updated continuously or only when it is needed. no final decision was taken as to the rules of computation applying in the belief-net. the more interesting solutions would ideally require training data , and it is still debatable whether the current collection would be appropriate for this particular task. in any case , how different architectures can be implemented in javabayes and what modifications would be necessary for the purposes of this project also need to be investigated. a simulator of the set of influence links forming the belief-net was created and put up for discussion. different sections of the analysis , such as the user model , the ontology and the discourse , are represented as a layer of nodes , each with its own eva probabilities. they form endpoints into which other nodes like go-there , user_budget , user_thrift and prosody feed. the second presentation concerned the set of computational rules that are to be used with the net. the simple way to decide on the final output is the majority vote ( whichever of e , v or a forms the majority of the parent nodes' outputs ).
this assumes that all inputs are of equal importance. alternatively , each input can be weighted in a fixed way. a third option is to create a hidden variable that makes the decision of which of the inputs is more trusted in a particular situation. the same variable can potentially also change the weighting of each input.
###dialogue: professor b: ok , so i i whether ami 's coming or not but we oughta just get started . professor b: , so there you go . anyway , so my idea f for today and we can decide that is n't the right thing to do was to at spend at least part of the time trying to build the influence links , which sets of things are relevant to which decisions and actually i had specific s suggestion to start first with the path ones . the database ones being in some sense less interesting to us although probably have to be done and so to do that so there 's and the idea was we were gon na do two things professor b: right , . we were gon na do two things one of which is just lay out the influence structure of what we think influences what professor b: and then as a separate but related task particularly bhaskara and i were going to try to decide what kinds of belief nodes are needed in order to do what we need to do . once so but du we should have all of the basic design of what influences what done before we decide exactly how to compute it . so i did n't did you get a chance to look yet ? professor b: great . ok so let 's start with the belief - nets , the general influence and then we 'll also at some point break and talk about the techy . grad e: one could go there 's we can di discuss everything . first of all this i added , i knew from this has to be there right ? grad e: given given not transverse the castle , the decision is does the person want to go there or is it just professor b: right , true . does have to be there . and i ' m we 'll find more as we go that grad e: and ? so go - there in the first place or not is definitely one of the basic ones . we can start with that . interesting effect . is this true or false or maybe we 'll get professor b: when we 're when we 're done . so so the reason it might not be true or false is that we did have this idea of when so it 's , current @ and so on or not , professor b: right ? and so that a decision would be do we want that so you could two different things you could do , you could have all those values for go - there or you could have go - there be binary and given that you 're going there when . grad a: it seems that you could it seems that those things would be logically independent like you would wanna have them separate or binary , go - there and then the possibilities of how to go there because grad a: because , it might be easy to figure out that this person is going to need more film eventually from their utterance but it 's much more complex to query when would be the most appropriate time . grad e: and so i ' ve tried to come up with some initial things one could observe so who is the user ? everything that has user comes from the user model everything that has situation comes from the situation model - a . we should be clear . but when it comes to writing down when you do these things is it here ? you have to a write the values this can take . grad e: and here i was really in some s sometimes i was really standing in front of a wall feeling very stupid because this case it 's pretty simple , but as we will see the other ones if it 's a running budget so what are the discrete values of a running budget ? so maybe my understanding there is too impoverished . grad e: how can i write here that this is something , a number that cr keeps on changing ? but ok . thus is understandable ? professor b: you ' ve s have you seen this before keith , these belief - net things ? 
grad e: so here is the we had that the user 's budget may influence the outcome of decisions . there we wanted to keep a running total of things . grad d: is this like a number that represents how much money they have left to spend ? ok , h how is it different from user finance ? grad e: the finance is here thought of as the financial policy a person carries out in his life , he is he cheap , average , or spendy ? grad e: and i did n't come maybe a user i , i did n't want to write greediness , but professor b: . so keith w what 's behind this is actually a program that will once you fill all this in actually s solve your belief - nets for you and . professor b: so this is not just a display , this is actually a gui to a simulator that will if we tell it all the right things we 'll wind up with a functioning belief - net at the other end . grad e: ok , so here was ok , think of people being cheap , average , or spendy or we can even have a finer scale moderately cheap , grad e: does n't matter . agree there but here i was n't what to write in . grad d: , you ' ve written in what seems to be required like what else is do you want ? professor b: . so here 's what 's permissible is that you can arrange so that the value of that is gon na have to be updated and n it 's not a belief update , it 's you took some actions , you spent money and , so the update of that is gon na have to be essentially external to the belief - net . right ? and then what you 're going to need is for the things that it influences . let 's first of all let 's see if it does influence anything . and if it does influence anything then you 're gon na need something that converts from the number here to something that 's relevant to the decision there . so it could be ra they create different ranges that are relevant for different decisions or whatever but for the moment this is just a node that is conditioned externally and might influence various things . grad d: the other thing is that every time that 's updated beliefs will have to be propagated but then the question is do you do we wanna propagate beliefs every single time it 's updated or only when we need to ? professor b: , that 's a good question . and does it have a lazy mode ? i do n't remember . grad d: , in srini 's thing there was this option like proper inferences which suggests that does n't happen , automatically . professor b: right . s probably does . someone has to track that down , but i but and and actually professor b: one of the we w items for the user home base should be essentially non - local . i they 're only there for the day and they do n't have a place that they 're staying . grad e: just accidentally erased this , had values here such as is he s we had in our list we had " is he staying in our hotel ? " , is he staying with friends ? , and so we 're professor b: so it 's clear where w where we are right now . so my suggestion is we just pick professor b: one , one particular one of the let 's do the first one let 's do the one that we already think we did so w that was the of the endpoint ? professor b: no , he has he has n't filled them in yet , is what 's true . grad e: did i or did n't i ? . probably nothing done yet , did it on the upper ones , ok . makes sense . ok , so this was eva . maybe we can think of more things , cross professor b: , would be a f for a given segment . 
, you y you go first go the town square grad a: no , if you go to re if you go to prague or whatever one of your key points that you have to do is cross the charles bridge and does n't really matter which way you cross which where you end up at the end but the part the good part is walking over it , so . professor b: that 's subtle , but true . so let 's just leave it three with three for now professor b: and let 's see if we can get it linked up just to get ourselves started . professor b: you 'll see it you 'll see something comes up immediately , that the reason i wanna do this . grad e: w the user was definitely more likely to enter if he 's a local more likely to view if he 's a tourist and then we had the fact that given the fact that he 's thrifty and there will be admission then we get all these cross professor b: we did , but the three things w that it contributed to this , the other two are n't up there . so one was the ontology professor b: ok , so this is w right , so what w i what we seem to need here , this is why it starts getting into the technical professor b: the way we had been designing this , there were three intermediate nodes which were the endpoint decision as seen from the user model as seen from the ontology and as seen from the discourse . so each of those the way we had it designed , now we can change the design , but the design we had was there was a decision with the same three outcomes based on the th those three separate considerations so if we wanted to do that would have to put in three intermediate nodes professor b: and then what you and i have to talk about is , ok if we 're doing that and they get combined somehow how do they get combined ? but the they 're undoubtedly gon na be more things to worry about . grad e: so that 's w in our in johno 's pictogram everything that could contribute to whether a person wants to enter , view , or approach something . professor b: , it was called mode , so this is m mode here means the same as endpoint . professor b: alright . , but that was actually , unfortunately that was a an intermediate versio that 's i do n't think what we would currently do . grad a: can i ask about " slurred " and " angry " as inputs to this ? grad c: if the if the person talking is angry or slurs their speech they might be tired or , professor b: but that 's - that seems to , so so my advice to do is get this down to what we think is actually likely to be a strong influence . but , that was what he had in mind . professor b: so let 's think about this question of how do we wanna handle so there 're two separate things . one is at least two . one is how do we want to handle the notion of the ontology now what we talked about , and this is another technical thing bhaskara , is can we arrange so that we can so that the belief - net itself has properties and the properties are filled in from on ontology items . so the let 's take the case of the this endpoint thing , the notion was that if you had a few key properties like is this a tourist site , some landmark is it a place of business is it something you physically could enter ok , et cetera . so that there 'd be certain properties that would fit into the decision node and then again as part of the ou outer controlling conditioning of this thing those would be set , so that some somehow someone would find this word , look it up in the ontology , pull out these properties , put it into the belief - net , and then the decision would flow . 
now grad e: seems to me that we ' ve e embedded a lot , embedded a lot of these things we had in there previously in some of the other final decisions done here , if we would know that this thing is exhibiting something if it 's exhibiting itself it is a landmark , meaning more likely to be viewed if it is exhibiting pictures or sculptures and like this , then it 's more likely to be entered . professor b: i that 's completely right and that 's good , so what that says is that we might be able to take and in particular so the ones we talked about were exhibiting and selling grad e: if it 's closed one probably wo n't enter . or if it 's not accessible to a tourist ever the likelihood of that person actually wanting to enter it , given that he knows it , . professor b: so let me suggest this . w could you move those up about halfway . the ones that you th and selling i . professor b: so here 's what it looks like to me . is that you want an intermediate structure which i is essentially the or of for this purpose of selling , f fixing , or servicing . so that it that is , for certain purposes , it becomes important but for this purpose one of these places is quite like the other . does that seem right ? so we di professor b: if we yes . so if it may be more than endpoint decisions , so the idea would be that you might wanna merge those three professor b: it i here 's where it gets a little tricky . from the belief - net point of view it is from another point of view it 's interest it 's important to it 's selling or servicing and . so for this decision it 's just true or false and in th this is a case where the or seems just what you want . that that if any of those things is true then it 's the place that you professor b: you could , . , so let 's do that . no no , no to an inter no , an intermediate node . grad e: so are they the is it the object that sells , fixes , or services things ? professor b: say w it 's co i would s a again for this purpose it 's commercial . someplace you want to go in to do some business . grad d: what does the underscore - t at the end of each of those things signify ? grad e: things . so places that service things sell things or fix things and pe places that e exhibit things . grad a: so we 're deriving this the this feature of whether the main action at this place happens inside or outside or what we 're deriving that from what activity is done there ? could n't you have it as just a primitive feature of the entity ? grad a: it seems like that 's much more reliable cuz you could have outdoor places that sell things and indoor places that do something else professor b: , the problem with it is that it putting in a feature just for one decision , professor b: now w we may wind up having to do that this i anyway , this i at a mental level that 's what we 're gon na have to sort out . so , what does this look like , what are intermediate things that are worth computing , what are the features we need in order to make all these decisions and what 's the best way to organize this so that it 's clean and consistent and all that . grad a: i ' m just thinking about how people , human beings who know about places and places to go and so on would store this and it would probably you would n't just remember that they sell and then deduce from that it must be going on inside . 
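for concreteness , the intermediate " commercial " node proposed above can be written as a deterministic or over its parents ; this is only a sketch of the idea , and the node and parent names are ours rather than anything fixed in the meeting :

```latex
% deterministic-or cpt for a hypothetical intermediate node
% commercial <- sells_t , fixes_t , services_t
\[
P(\mathit{commercial} = \mathrm{true} \mid s, f, v) =
\begin{cases}
1 & \text{if } s \lor f \lor v \\
0 & \text{otherwise}
\end{cases}
\]
```

the endpoint decision then conditions on one binary node instead of three , which is the cpt saving being argued for here .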
grad e: an entity maybe should be regard as a vector of several possible things , it can either do s do sell things , fix things , service things , exhibit things , it can be a landmark at the same time as doing these things , it 's not either or mmm certainly a place can be a hotel and a famous site . many come to mind . things can be generally a landmark and be accessible . ie a castle or can be a landmark a or not accessible , some statue can go inside . professor b: anyway so let me suggest you do something else . which is to get rid of that l long link between who the user and the endpoint . professor b: no no , i do n't want the link there . because what we 're gon na want is an intermediate thing which is the endpoint decisi the endpoint decision based o on the user models , so what we talked about is three separate endpoint decisions , so let 's make a new node grad c: just as a suggestion maybe you could " save as " to keep your old one and clean and so you can mess with this one . professor b: let 's say underbar - u , so that 's the endpoint decision as seen through the professor b: underscore - e for entity , and we may change all this , but . and grad e: ok , should n't i be able to move them all ? no . or ? can i ? where ? what ? professor b: i d i . actually , i the easiest thing would move mo move the endpoint , go ahead . just do whatever . professor b: and maybe th maybe it 's just one who is the user , i , maybe there 's more . grad e: if he 's usi if he 's in a car right now what was that people with harry drove the car into the cafe professor b: never mind . anyway , this is crude . now but the now so but then the question is so and we assume that some of these properties would come indirectly through an ontology , but then we had this third idea of input from the discourse . professor b: , maybe , i again , i d , ok , put in but what we 're gon na wanna do is actually grad e: here this was one of my problems we have the user interest is a vector of five hundred values , so that 's from the user model , grad d: right . so why is it , so it 's like a vector of five hundred one 's or zero 's ? professor b: so you cou and so here let me give you two ways to handle that . alright ? one is you could ignore it . but the other thing you could do is have an and this will give you the flavor of the of what you could have a node that 's that was a measure of the match between the object 's feature , the match between the object the entity , i ' m and the user . so you could have a k a " fit " node that would have to be computed by someone else but so that grad a: user schedule . do i have time to go in and climb all the way to the top of the koelner dome or do have to " time to take a picture of the outside ? 
" professor b: that 's what we do n't wanna do , see that se cuz then we get into huge combinatorics and like that an grad c: cuz if the , and if the user is tired , the user state , right , it would affect , but i ca n't see why e anything w everything in the model would n't be professor b: , but , that 's we ca n't do that , so we 're gon na have to but this is a good discussion , we 're gon na have to somehow figure out some way to encapsulate that so if there 's some general notion of the relation to the time to do this to the amount of time the guy has like that is the compatibility with his current state , so that 's what you 'd have to do , you 'd have to get it down to something which was itself relatively compact , so it could be compatibility with his current state which would include his money and his time and his energy grad d: no but , it 's more than that , like the more you break it up like because if you have everything pointing to one node it 's like exponential whereas if you like keep breaking it up more and more it 's not exponential anymore . professor b: so it , there are two advantages . that 's tha there 's one technical one grad c: s so we 'd be doing subgrouping ? subgrouping , into mo so make it more tree like going backwards ? professor b: but it there 's two advantages , one is the technical one that you do n't wind up with such big exponential cbt 's , professor b: the other is it can be it presumably can be used for multiple decisions . so that if you have this idea of the compatibility with the requirements of an action to the state of the user one could imagine that was u not only is it sim is it cleaner to compute it separately but it could be that it 's used in multiple places . anyway th so in general this is the design , this is really design problem . ok , you ' ve got a signal , a d set of decisions how do we do this ? grad e: what do i have under user state anyhow cuz i named that already something . that 's tired , fresh , maybe should be renamed into physical state . grad c: i the question is it 's hard for me to imagine how everything would n't just contribute to user state again . or user compatibility . grad e: the user interests and the user who the user is are completely apart from the fact whether he is tired broke grad c: , but other though the node we 're creating right now is user compatibility to the current action , right ? grad c: seems like everything in the user model would contribute to whether or not the user was compatible with something . professor b: maybe not . the that 's the issue is would even if it was true in some abstract general sense it might not be true in terms of the information we actually had and can make use of . and anyway we 're gon na have to find some way to cl get this sufficiently simple to make it feasible . grad e: maybe if we look at the if we split it up again into if we look at the endpoint again we said that for each of these things there are certain preconditions so you can only enter a place if you are not too tired to do so and also have the money to do so if it costs something so if you can afford it and perform it is preconditions . viewing usually is cheap or free . is that always true ? i . professor b: w w but that viewing it without ent view w with our definition of view it 's free cuz you grad a: what about the grand canyon , right ? no , never mind . 
are there large things that you would have to pay to get up close to like , never mind , not in the current professor b: no we have to enter the park . almost by definition paying involves entering , ge going through some professor b: so let me suggest we switch to another one , clearly there 's more work to be done on this but it 's gon na be more instructive to think about other decisions that we need to make in path land . and what they 're gon na look like . grad c: so you can save this one as and open up the old one , right and then everything would be clean . you could do it again . professor b: why , it 's worth saving this one but i 'd like to keep this one cuz i wanna see if we 're gon na reuse any of this . professor b: you tell me , so in terms of the planner what 's a good one to do ? grad e: let 's th this go there or not is a good one . is a very basic one . so what makes things more likely that professor b: the fir see the first thing is , getting back to thing we left out of the other is the actual discourse . so keith this is gon na get into your world because we 're gon na want to know , which constructions indicate various of these properties s and so i do n't yet know how to do this , i we 're gon na wind up pulling out discourse properties like we have object properties and we what they are yet . so that the go - there decision will have a node from discourse , and i why do n't we just stick a discourse thing up there to be as a placeholder for grad e: identified that and so again re that 's completely correct , we have the user model , the situation model here , we do n't have the discourse model here yet . much the same way as we did n't we do n't have the ontology here . professor b: the ontology we said we would pull these various kinds of properties from the ontology like exhibiting , selling , and . professor b: so in some sense it 's there . but the discourse we do n't have it represented yet . grad e: this be specific for second year ? and and we probably will have something like a discourse for endpoint . professor b: but if we do it 'll have the three values . it 'll have the eva values if we have it . professor b: for go - there , probably is true and false , let 's say . that 's what we talked about . grad e: , we 're looking at the little data that we have , so people say how do i get to the castle and this usually means they wanna go there . so this should push it in one direction however people also sometimes say how do i get there in order to find out how to get there without wanting to go there . grad e: and sometimes people say where is it because they wanna know where it is but in most cases they probably professor b: , but that does n't change the fact that you 're you want these two values . grad e: true . so this is some external thing that takes all the discourse and then says here it 's either , yay , a , or nay . ok ? professor b: and they 'll be a y , a user go - there and maybe that 's all , i . grad d: that definitely interes but that now that what 's the word the that interacts with the eva thing if they just wanna view it then it 's fine to go there when it 's closed whereas if they want to so professor b: right , so that 's where it starts getting to be essentially more interesting , so what bhaskara says which is completely right is if that they 're only going to view it then it does n't matter whether it 's closed or not in terms of , whether you wanna go there . 
grad d: , that 's what i said just having one situational node may not be enough because this that node by itself would n't distinguish professor b: i it can have di various values . , but we you 're right it might not be enough . grad d: , see i ' m thinking that any node that begins with " go - there " is either gon na be true or false . grad a: also , that node , the go - there s s node would just be fed by separate ones for , there 's different things , the strikes and the professor b: . so so now the other thing that bhaskara pointed out is what this says is that there sh should be a link , and this is where things are gon na get very messy from the endpoint decision professor b: maybe the t they 're final re and , i the very bottom endpoint decision to the go - there node . and i do n't worry about layout , then we 'll go nuts grad d: mmm . maybe we could have intermediate node that just the endpoint and the go - there s node fed into ? professor b: the go - there , actually the endpoint node could feed into the go - there s that 's right , professor b: so the endpoint node , make that up t to the go - there then we 'll have to do layout at some point , but something like that . now it 's gon na be important not to have loops . really important in the belief worl net world not to have loops professor b: no it 's much worse than that . it if i loo it 's not def i it 's not defined if you 're there are loops , professor b: you just you have to there are all sorts of ways of breaking it up so that there is n't grad e: but this is n't , this is this line is just coming from over here . professor b: , no it 's not a loop yet , i ' m just saying we , in no , in grad d: , but the good thing is we could have loopy belief propagation which we all love . professor b: ok , so anyway , so that 's another decision . what 's another decision you like ? grad e: ok , these have no parents yet , but i that does n't matter . right ? professor b: , the idea is that you go there , you go comes from something about the user from something about the situation and the discourse is a mystery . grad e: this is this comes from traffic and , . sh - should we just make some grad e: if there 's parking maybe mmm who cares . and if he has seen it already or not and , and discourse is something that should we make a keith note here ? that comes from keith . just so we do n't forget . have to get used to this . ok , whoops . professor b: and then also the discourse endpoint , i endpoint sub - d is if you wanna make it consistent . grad a: actually is this the right way to have it where go there from the user and go there from the situation just about each other but they both feed the go there decision because is n't the , grad a: , but that still allows for the possibility of the user model affecting our decision about whether a strike is the thing which is going to keep this user away from grad a: if you needed it to do that . but ok i was just thinking i maybe i ' m conflating that user node with possible asking of the user hey there 's a strike on , does that affect whether or not you wanna go grad a: or , so that might not come out of a user model but , directly out of interaction . professor b: i gu yes my curr , do n't that 's enough . my current idea on that would be that each of these decision nodes has questions associated with it . and the question would n't itself be one of these conditional things , given that there 's a strike do you still wanna go ? 
but if you told him a bunch of , then you would ask him do you wanna go ? but trying to formulate the conditional question , that sounds too much . professor b: because i want to do a little bit of organization . before we get more into details . the organization is going to be that the flavor of what 's going on is going to be that as we s e going to this detail keith is going to worry about the various constructions that people might use and johno has committed himself to being the parser wizard , professor b: so what 's going to happen is that eventually like by the time he graduates , ok they 'll be some system which is able to take the discourse in context and have outputs that can feed the rest of belief - net . i j wa i assume everybody knows that , wanna , get closure that 'll be the game then , so the semantics that you 'll get out of the discourse will be of values that go into the various discourse - based decision nodes . and now some of those will get fancier like mode of transportation and so it is n't by any means necessarily a simple thing that you want out . so if there is an and there is mode of transportation grad e: and it there 's a also a split if you loo if you blow this up and look at it in more detail there 's something that comes from the discourse in terms of what was actually just said what 's the utterance go giving us and then what 's the discourse history give us . professor b: , that , we 'll have to decide how much of th where that goes . professor b: an and it 's not clear yet . it could be those are two separate things , it could be that the discourse gadget itself integrates as which would be my that you 'd have to do see in order to do reference and like that you ' ve got ta have both the current discourse and the context to say i wanna go back there , what does that mean grad e: but is th is this picture that 's emerging here just my wish that you have noticed already for symmetry or is it that we get for each decision on the very bottom we get the sub - e , sub - d , sub - u and maybe a sub - o " for " ontology " meta node but it might just professor b: which is s if that 's true how do we wanna combine those ? o or when it 's true ? grad e: but this w wou would be though that , we only have at most four at the moment arrows going f to each of the bottom decisions . and four you we can handle . professor b: i it see i if it 's fou if it 's four things and each of them has four values it turns out to be a big cpt , it 's not s completely impossi it 's not beyond what the system could solve but it 's probably beyond what we could actually write down . or learn . professor b: it 's and i do n't think it 's gon na g e i do n't think it 'll get worse than that , so le that 's a good grad e: but but four did n't we decide that all of these had true or false ? so is it 's four professor b: for go there , but not f but not for the other one 's three values for endpoint already . grad d: , you need actually three to the five because if it has four inputs and then it itself has three values it can get big fast . professor b: each so you 're from each point of view you 're making the same decision . so from the point of view of the ob of the entity grad d: this and also , the other places where , like consider endpoint view , it has inputs coming from user budget , user thrift so even professor b: those are not necessarily binary . s so we 're gon na have to use some t care in the knowledge engineering to not have this explode . 
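the " three to the five " here is just the cpt size for a node with four three - valued parents that itself takes three values :

```latex
\[
|\mathrm{CPT}| = |Y| \times \prod_{i=1}^{4} |X_i| = 3 \times 3^{4} = 3^{5} = 243
\]
```

which is the two hundred and forty - three figure that comes up again just below .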
and it does n't in the sense that read it , actually with the underlying semantics and it is n't like you have two hundred and fifty - six different ways of thinking about whether this user wants to go to some place . so we just have to figure out what the regularities are and code them . but what i was gon na suggest next is maybe we wanna work on this a little longer but i do want to also talk about the thing that we started into now of it 's all fine to say all these arrows come into the si same place what rule of combination is used there . so th yes they so these things all affect it , how do they affect it ? and belief - nets have their own beliefs about what are good ways to do that . so is it 's clearer n clear enough what the issue is , so do we wanna switch that now or we wanna do some more of this ? grad e: r w we just need to in order to get some closure on this figure out how we 're gon na get this picture completely messy . professor b: , here he here 's one of the things that i th you sh you no , i how easy it is to do this in the interface but you it would be great if you could actually just display at a given time all the things that you pick up , you click on " endpoint " , and everything else fades and you just see the links that are relevant to that . and i does anybody remember the gui on this ? grad c: d i would almost say the other way to do that would be to open u or make n - many belief - nets and then open them every time you wanted to look at a different one vers cuz grad e: i have each of these thing each of the end belief - nets be a page and then you click on the thing and then li consider that it 's respective , professor b: the b anyway so it clear that even with this if we put in all the arrows nobody is gon na be able to read the diagram . alright , so e we have to figure out some display hack to do this because anyway i let me consi suggest that 's a s not a first - order consideration , we have two first - order considerations which is what are the influences a , and b how do they get combined mathematically , how do we display them is an issue , grad c: i do n't , do n't think this has been designed to support something like that . grad d: , it might soon , if this is gon na be used in a serious way like java base then it might soon be necessary to start modifying it for our purposes . professor b: , and i that seems like a perfectly feasible thing to get into , but we have to we want first . ok , so why do n't you tell us a little bit about decision nodes and what the choices might be for these ? grad d: i this board works fine . so recall the basic problem which is that you have a belief - net and you have like a lot of different nodes all contributing to one node . right ? so as we discussed specifying this thing is a big pain and it 's so will take a long time to write down because if these s have three possibilities each and this has three possibilities then you have two hundred and forty - three possibilities which is already a lot of numbers to write down . so what helps us in our situation is that these all have values in the same set , right ? these are all like saying ev or a , so it 's not just a generalized situation like we wanna just take a combination of we wanna view each of these as experts ea who are each of them is making a decision based on some factors and we wanna combine their decisions and create , sorta weighted combination . 
grad d: so the problem is to specify the so the conditional property of this given all those , that 's the way belief - nets are defined , like each node given its parents , so that 's what we want , we want p of let 's call this guy y and let 's call these x - one , x - two xn , so we want probability that y equals , e given that these guys are i 'll just refer to this as like x hat , the co like all of them ? given that the data says , a , v , a , e , so we would like to do this combination .
professor b: alright , so is that i , wanna make everybody is with us before he goes on .
grad d: so , right . so what we do n't wanna do is to for every single combination of e and v and a and every single letter e , s give a number because that 's not desirable . what we wanna do is find some principled way of saying what each of these is and we want it to be a valid probability distribution , so we want it to add up to one , so those are the two things that we need . so what i , what jerry suggested earlier was that we , view these guys as voting and we just take the we essentially take averages , so here two people have voted for a , one has voted for v , and one has voted for e , so we could say that the probabilities are , probability of being e is one over four , because one person voted for e out of four and similarly , probability of so this is probability of e s and then probability of a given all that is two out of four and probability of v is one out of four . so that 's step that 's the that 's the basic thing . now is that all ok ?
grad e: it 's x - one voted for a x - two voted for v and ?
professor b: y right . s so this assumes symmetry and equal weights and all this things , which may or may not be a good assumption ,
grad d: so step two is so we ' ve assumed equal weights whereas it might turn out that , some w be that , what the actual the verbal content of what the person said , like what might be somehow more important than the
grad d: , so we do n't wanna like give them all equal weight so currently we ' ve been giving them all weight one fourth so we could replace this by w - one , w - two , w - three , and w - four and in order for this to be a valid probability distribution for each x - hat , we just need that the w 's sum to one . so they can be , you could have point one , point three , point two , and point four , say .
grad c: so i jus just to make i understand this , so in this case we would still compute the average ?
grad c: ok , so it 'd be so in this case the probability that y equals a would be w one times
grad d: so these numbers have been replaced with point one , point three , point two , and point four . so you can view these as gone . probability of so , alright . so this is step two . so the next possibility is that we ' ve given just a single weight to each expert , right , whereas it might be the case that in certain situations one of the experts is more reliable and in certain situations the other expert is more reliable . so the way this is handled is by what 's called a mixture of experts , so what you can have is you augment these diagrams like this you have a new thing called " h " , ok ? this is a hidden variable . and what this is it gets its input from x - one , x - two , x - three , and x - four , and what it does is it decides which of the experts is to be trusted in this particular situation . and then these guys all come here . so this is slightly more complicated .
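written out , the two combination schemes described so far look like this ( our notation , with $\mathbf{1}[\cdot]$ equal to one when the condition holds and zero otherwise ) :

```latex
\[
P(y = v \mid x_1 , \ldots , x_n) = \sum_{i=1}^{n} w_i \, \mathbf{1}[x_i = v] ,
\qquad \sum_{i=1}^{n} w_i = 1
\]
```

with uniform weights $w_i = 1/n$ this is the plain vote : for the board 's example $\hat{x} = (a , v , a , e)$ it gives $P(y{=}a) = 2/4$ and $P(y{=}v) = P(y{=}e) = 1/4$ . step two simply lets the $w_i$ differ .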
so what 's going on is that this h node looks at these four values of those guys and it decides given these values which of these is likely to be more reliable or most reliable . so h produces some , it produces a number , either one , two , three , or four , in our situation , now this guy he looks at the value of h say it 's two , and then he just selects the thing . that 's all there is to say , i about it . right , so you can have a mixture that
grad a: so so the function of the thing that comes out of h is very different from the function of the other inputs . it 's driving how the other four are interpreted .
grad c: it 's to tell the bottom node which one of the situations that it 's in or which one of the weighting systems
grad c: w i was just , if you wanted to pay attention to more than one you could pass a w a weighting s system though too , could n't you ?
grad a: does h have to have another input to tell it alpha , beta , whatever , or is the that 's determined by what the experts are saying , like the type of situ it it just seems that like without that outside input that you ' ve got a situation where , like if x - one says no , a low value coming out of x - on or i if x - one says no then ignore x - one , that seems like that 'd be weird ,
grad d: , could be things like if x - two and x - three say yes then i ignore x - one also .
grad c: the situations that h has , are they built into the net or ok , so they could either be hand coded or learned or based on training data , so you specify one of these things for every one of those possi possible situations .
grad d: , to learn them we need data , where are we gon na get data ? we need data with people intentions , which is slightly tricky . but what 's the data about like , are we able to get these nodes from the data ?
grad a: like how thrifty the user is , or do we have access to that ? right . good .
grad d: , but that 's my question , like how do we , how do we have data about something like endpoint sub - e , or endpoint sub s s ?
grad c: , you would say , based on in this dialogue that we have which one of the things that they said whether it was the entity relations or whatever was the thing that determined what mode it was ,
grad d: so this is what we wanna learn . i do n't think , you have a can you bring up the function thing ? w where is the thing that allows you to
grad d: function properties , is that it ? , i not . , that 's and it so e either it 'll allow us to do everything which is unlikely , more likely it 'll allow us to do very few of these things and in that case we 'll have to just write up little things that allow you to create such cpt 's on your own in the javabayes format . , i was assuming that 's what we 'd always do because i was assuming that 's what we 'd always do , it 's
grad c: in terms of javabayes it 's what you see is what you get in i do n't i would be surprised if it supports anything more than what we have right here .
grad a: just talking about that general end of things is there gon na be data soon from what people say when they 're interacting with the system and so on ? like , what questions are being given being asked ? cuz fey , you mean . o ok .
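the mixture - of - experts variant adds the gate $h$ ; in the same notation , the selection described here is :

```latex
\[
P(y = v \mid \hat{x}) = \sum_{i=1}^{n} P(h = i \mid \hat{x}) \, \mathbf{1}[x_i = v]
\]
```

so the gate looks at all the experts ' outputs , puts a distribution over which expert to trust in this situation , and the final node copies the chosen expert . if $P(h \mid \hat{x})$ ignores $\hat{x}$ , this collapses back to the fixed weights above .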
i ' m just wondering , because in terms of , w the figure i was thinking about this figure that we talked about , fifty constructions or whatever that 's a whole lot of constructions and , one might be f fairly pleased with getting a really good analysis of five maybe ten in a summer so , i know we 're going for a rough and ready . , i was talking about the , if you wanted to do it really in detail and we do n't really need all the detail for what we 're doing right now but anyway in terms of just narrowing that task which fifty do i do , i wanna see what people are using , so , it will inspire me . , . touche . good enough .

###summary: a detailed diagram of the belief-net had already been disseminated. its structure was discussed during the meeting. there are several endpoints ( user , ontology , discourse etc ) with separate eva ( enter/view/approach ) values. details of how different inputs feed into them were discussed at length. ideas mentioned included grouping features of buildings like "selling" , "fixing" and "exhibiting" , as well as creating a user-compatibility node that would take different values depending on the situation and the user status. similarly , a go-there ( towards a building ) node can be influenced by things like the user's budget and discourse parameters amongst other things. the latter are still ill-defined at this stage. the study of the linguistic constructions that people use in this kind of navigational domain is expected to prove useful in that respect.

as each node in the tree is the decision point of the combination of its parent nodes , which rules govern this combination is an important issue. there are several approaches ranging from simply averaging the inputs to using a hidden variable in order to weight them differently depending on context. if the latter architecture is used , the net could -to an extent- be trained with the data that is currently being collected.

although this was mainly a brainstorming meeting , some minor tasks were allocated for the near future. since the net architecture and possible decision algorithms were discussed , it is necessary to examine how much of this javabayes can accommodate and , if not , what modifications would be necessary. additionally , the german partners visiting the institute will need to see some results of the new system design. finally , the analysis of the linguistic constructions for the current research domain can begin even with limited data , as , at this stage , they need not be very detailed.

the net is only a diagrammatic view of how the decision tree for the eva task looks. a lot of the details have been glossed over: the user model can potentially comprise a huge number of factors; a planning "go-there" node needs input from several other areas of the net; there are intricate interactions between discourse and the situation model. similarly , what discourse properties are of importance and how they influence eva probabilities is still a mystery. on a more general note , there is also the question of whether the net should be updated continuously or only when it is needed. no final decision was taken as to the rules of computation applying in the belief-net. the more interesting solutions would ideally require training data , and it is still debatable whether the current collection would be appropriate for this particular task.
in any case , how different architectures can be implemented in javabayes and what modifications would be necessary for the purposes of this project also need to be investigated. a simulator of the set of influence links forming the belief-net was created and put up for discussion. different sections of the analysis , such as the user model , the ontology and the discourse are represented as a layer of nodes each with its own eva probabilities. they form endpoints to which other nodes like go-there , user_budget , user_thrift and prosody feed into. the second presentation concerned the set of computational rules that are to be used with the net. the simple way to decide on the final output is the majority vote ( which e , v or a form the majority of the parent nodes' outputs ). this assumes that all inputs are of equal importance. alternatively , each input can be weighted in a fixed way. a third option is to create a hidden variable that makes the decision of which of the inputs is more trusted in a particular situation. the same variable can potentially also change the weighting of each input.
grad e: no , she told me a long time ago . she told me she told me like two weeks ago . grad e: no . you should be at least be self - satisfied enough to laugh at your own jokes . professor f: once again , right , together . , so we have n't had a meeting for a while , and probably wo n't have one next week , a number of people are gone . , so robert , why do n't you bring us up to date on where we are with edu ? grad b: , in a smaller group we had , talked and decided about continuation of the data collection . so fey 's time with us is almost officially over , and she brought us some thirty subjects and , t collected the data , and ten dialogues have been transcribed and can be looked at . if you 're interested in that , talk to me . , and we found another , cogsci student who 's interested in playing wizard for us . here we 're gon na make it a little bit more complicated for the subjects , this round . she 's actually suggested to look , at the psychology department students , because they have to partake in two experiments in order to fulfill some requirements . so they have to be subjected , before they can actually graduate . and , we want to design it so that they really have to think about having some time , two days , to plan certain things and figure out which can be done at what time , and , package the whole thing in a re in a few more complicated , structure . that 's for the data collection . as for smartkom , i ' m the last smartkom meeting i mentioned that we have some problems with the synthesis , which as of this morning should be resolved . and , so , grad b: should be means they are n't yet , but i have the info now that i need . plus , johno and i are meeting tomorrow , so maybe , when tomorrow is over , we 're done . and ha n hav we 'll never have to look at it again maybe it 'll take some more time , to be realistic , but at least we 're seeing the end of the tunnel there . that was that . the , i do n't think we need to discuss the formalism that 'll be done officially s once we 're done . something happened , in on eva 's side with the prm that we 're gon na look at today , we have a visitor from bruchsal from the international university . andreas , you ' ve met everyone except nancy . grad b: that will be reuter ? so my scientific director of the eml is also the dean of the international university , one of his many occupations that just contributes to the fact that he is very occupied . the , he @ might tell us a little bit about what he 's actually doing , and why it is s somewhat related , and by using maybe some of the same technologies that we are using . and . was that enough of an update ? grad d: , . , so , i ' ve be just been looking at , ack ! what are you doing ? i ' ve been looking at the prm . so , this is , like the latest thing i have on it , and i sorta constructed a couple of classes . like , a user class , a site class , and , a time , a route , and then and a query class . and i tried to simplify it down a little bit , so that actually , look at it more . it 's the same paper that i gave to jerry last time . so i took out a lot of , a lot of the decision nodes , and then tried to the red lines on the , graph are the , relations between the different , classes . like , a user has like , a query , and then , also has , reference slots to its preferences , the special needs and , money , and the user interest . and so this is more or less similar to the flat bayes - net that i have , with the input nodes and all that . 
and so i tried to construct the dependency models , a lot of these i got from the flat bayes - net , and what they depend on , and it turns out , the cpt 's are really big , if i do that , so i tried to see how do , put in the computational nodes in between . and what that would look like in a prm . and so i ended up making several classes actually , a class of with different attributes that are the intermediate nodes , and one of them is like , time affordability money affordability , site availability , and the travel compatibility . and so some of these classes are s some of these attributes only depend on from , say , the user , or s f just from , i , like the site . s like , these here , it 's only like , user , but , if you look at travel compatibility for each of these factors , you need to look at a pair of , what the , preference of the user is versus , what type of an event it is , or , which form of transportation the user has and whether , the onsite parking matters to the user , in that case . and that makes the scenario a little different in a prm , because , then you have one - user objects and potentially you can have many different sites in mind . for each of the site you 'll come up with this rating , of travel compatibility . and , they all depend on the same users , but different sites , and that makes a i ' m tr i w i wa have been trying to see whether the prm would make it more efficient if we do inferencing like that . and so , i you end up having fewer number of nodes than in a flat bayes - net , cuz otherwise you would c , it 's probably the same . but , no , you would definitely have be able to re - use , like , all the user , and not having to recompute a lot of the , because it 's all from the user side . so if you changed sites , you can , save some work on that . , in the case where , it depends on both the user and the site , then i ' m still having a hard time trying to see how , using the prm will help . , so anyhow , using those intermediate nodes then , this would be the class that represent the intermediate nodes . and that would it 's just another class in the model , with , references to the user and the site and the time . and then , after you group them together this no the dependencies would of the queries would be reduced to this . and so , it 's easier to specify the cpt and all . , so that 's about as far as i ' ve gone on the prm . right . grad d: ok . so it only makes two decisions , in this model . and one is how desirable a site is meaning , how good it matches the needs of a user . and the other is the mode of the visit , whether th it 's the eva decision . so , instead of , doing a lot of , computation about , which one site it wants of the user wants to visit , i 'll come , try to come up with like , a list of sites . and for each site , where h how it fits , and a rating of how it fits and what to do with it . so . anything else i missed ? professor f: so that was pretty quick . she 's ac eva 's got a little write - up on it that , probably gives the details to anybody who needs them . , so the you you did n't look yet to see if there 's anybody has a implementation . professor f: ok . so one so one of the questions , about these p r ms is professor f: , we are n't gon na build our own interpreter , so if we ca n't find one , then we , go off and do something else and until s one appears . , so one of the things that eva 's gon na do over the next few weeks is see if we can track that down . 
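as a rough illustration of the class structure being described — entity classes with reference slots , plus an intermediate class whose attributes depend on a ( user , site ) pair — here is a toy python sketch . all names and the scoring rule are invented for illustration , and this is not code for any actual prm interpreter :

```python
# toy sketch of the prm classes described above ; every name is hypothetical .
from dataclasses import dataclass

@dataclass
class User:                  # one instance , reused across candidate sites
    thrift: str              # "cheap" | "average" | "spendy"
    interests: set           # stand-in for the user-interest vector
    transport: str           # e.g. "car" , "foot"

@dataclass
class Site:                  # one instance per candidate site
    event_type: str
    onsite_parking: bool
    admission: float

@dataclass
class TravelCompatibility:   # intermediate node over a ( user , site ) pair
    user: User               # reference slot
    site: Site               # reference slot

    def rating(self) -> float:
        # toy stand-in for the cpt : match user preferences to site features
        score = 1.0 if self.site.event_type in self.user.interests else 0.0
        if self.user.transport == "car" and self.site.onsite_parking:
            score += 0.5
        if self.user.thrift == "cheap" and self.site.admission > 0:
            score -= 0.5
        return score

# the user object is built once ; only the per-site work is redone when the
# candidate site changes , which is the reuse being pointed to above .
u = User(thrift="cheap", interests={"castle"}, transport="foot")
for s in (Site("castle", False, 5.0), Site("museum", True, 0.0)):
    print(s.event_type, TravelCompatibility(u, s).rating())
```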
the people at stanford write papers as if they had one , but , we 'll see . so w anyway . so that 's a major open issue . if there is an interpreter , it looks like , what eva 's got should run and we should be able to actually , try to solve , the problems , to actually take the data , and do it . , and we 'll see . , i actually think it is cleaner , and the ability to instantiate , instance of people and sites and , will help in the expression . whether the inference gets any faster or not i . , it would n't surprise me if it does n't . , it 's the same information . there are things that you can express this way which you ca n't express in a normal belief - net , without going to some incredible hacking of rebuilding it on the fly . , the notion of instantiating your el elements from the ontology and fits this very nicely and does n't fit very into the extended belief - net . so that was one of the main reasons for doing it . i . so , people who have thought about the problem , like robert i it looked to me like if eva were able to come up with a , value for each of a number of , sites plus its eva thing , that a travel planner should be able to take it from there . and , with some other information about how much time the person has and whatever , and then plan a route . grad b: - , , first of all , great looks , mu much cleaner , nnn , certain certain beauty in it , so , if beauty is truth , then , we 're in good shape . but , as , mentioned before we probably should look at t the details . so if you have a write - up then , i 'd love to read it and because , i can you go all the way back to the very top ? , these @ these w when these are instantiated they take on the same values ? that we had before ? grad d: some of the things might that might be different , maybe like are that the hours for the site . and , eventually i meant that to mean whether they 're open at this hour or not . and status would be , more or less like , whether they 're under construction , and or like that . grad b: and the , other question i would have is that presumably , from the way the stanford people talk about it , you can put the probabilities also on the relations . if professor f: , i that 's that was actually in the previous the ubenth . i do n't remember whether they carried that over to this or not , grad b: it 's in the definition or in the in daphne 's definition of a prm is that classes and relations , and you 're gon na have cpt 's over the classes and their relations . grad d: i remember them learning when , you the structure for , but i do n't remember reading how you specify professor f: so , the plan is when daphne gets back , we 'll get in touch and supposedly , we 'll actually get s deep connected to their work and somebody 'll , if it 's a group meeting once a week probably someone 'll go down and , whatever . so , we 'll actually figure all this out . grad b: then the w long term perspective is pretty clear . we get rocking and rolling on this again , once we get a package , if , when , and how , then this becomes foregrounded profiled , focused , again . grad b: until then we 'll come up with a something that 's @ that 's way more complicated for you . right ? because this was laughingly easy , grad d: actually i had to take out a lot of the complicated , cuz i made it really complicated in the beginning , and jerry was like , " this is just too much " . professor f: . 
so , you could , from this , go on and say suppose there 's a group of people traveling together and you wanted to plan something that somehow , with some pareto optimal , thing for professor f: that 's the that would be a , you could sell it , as a ok , you do n't have to fight about this , just give your preferences to the grad b: but what does it would a pote potential result be to split up and never talk to each other again ? professor f: anyway . so there i there are some u , , elaborations of this that you could try to put in to this structure , but i do n't think it 's worth it now . because we 're gon na see what else we 're gon na do . but , it 's good , and there were a couple other ideas of , things for eva to look at in the interim . grad b: good . then , we can move on and see what andreas has got out his sleeve . or andy , for that matter ? grad c: ok . so , for having me here , first of all . , so maybe just a little background on my visit . so , i ' m not really involved in any project , that 's relevant to you , a at the moment , the reason is really for me , to have an opportunity to talk to some other researchers in the field . and and so i 'll just n give you a real quick introduction to what i ' m working on , and , hope that you have some comments or , maybe you 're interested in it to find out more , and so i 'll be , happy to talk to you and , i 'd also like to find out some more and maybe i 'll just walk around the office and then and ask some questions , in a couple days . so i 'll be here for , tomorrow and then , the remainder of , next week . ok , so , what i started looking at , to begin with is just , content management systems , i in general . so , what 's the state of the art there is to you have a bunch of documents or learning units or learning objects , and you store meta - data , associate to them . so there 's some international standards like the i - triple - e , there 's an i - triple - e , lon standard , and , these fields are pretty straightforward , you have author information , you have , size information , format information and so on . , but they 're two fields that are , more interesting . one is you store keywords associated with the document , and one is , you have a , , what is the document about ? so it 's some taxonomic , ordering of the units . now , if you put on your semantic glasses , you say , that 's not all that easy , because there 's an implicit , assumption behind that is that , all the users of this system share the same interpretation of the keyword and the same interpretation of , whichever taxonomy is used , and , that 's a very that 's a key point of these systems and they always brush over this real quickly without really elaborating much of that and , the only thing that m really works out so far are library ordering codes , which are very , very coarse grain , so you have some like , science , biology , and then but that 's really all that we have at the moment . so there 's a huge , need for improvement there . now , what this a standard like this would give us is we could , with a search engine just query , different repositories all over the world . but we ca n't really , so what i ' m what i try to do is , to have , so . so the scenario is the following , you 're working on some project and you encounter a certain problem . 
now , what we have at our university quite a bit is that , students , try to u program a certain assignment , they always run into the same problems , and they always come running to us , and they 'll say why 's it not it 's not working , and we always give out the same answer , so we thought , it 'd be to have a system that could take care of this , and so , what i want to build is a smart f a q system . now , what you need to do here is you need to provide some context information which is more elaborate than " i ' m looking for this and this keyword . " so . and that i do n't need to tell you this . i ' m you have the same when somebody utters a sentence in a certain , context it , and the same sentence in another context makes a huge difference . so , i want to be able to model information like , so in the context of developing distributed systems , of a at a computer science school , what software is the person using , which homework assignment is he or she working on at the moment , maybe what 's the background of that student 's , which error message was encountered . so this information should be transmitted , when a certain document is retrieved . now , giving this so we somehow need to have a formalized , way of writing this down , and that 's where the shared interpretation of certain terms and keywords comes in again . and , using this and some , knowledge about the domain you can do some simple inferences . like that when somebody 's working about , working on servlets , he 's using java , cuz servlets are used are written in java . so some inferences like that , now , u using this you can infer more information , and you could then match this to the meta - data of off the documents you 're searching against . so , what i wanna do is have some given these inputs , and then compute how many documents match , and use this as a metric in the search . now , what i plan to do is i want to do a try to improve the quality of the search results , and i want to do this by having a depth , steepest descent approach . so if i knew which operating system the person was working on , would this improve my search result ? and and having , a symbolic formalized model of this i could simply compute that , and find out which questions are worth , asking . and that 's what i then propagate back to the user , and try to optimize the search in this way . now , the big problem that i ' m facing right now is , it 's fairly easy to hack up a system quickly , that works in the small domain , but the problem is the scalability . and , so robert was mentioning , earlier today is that , microsoft with their printer set up program has a bayesian network , which does exactly this , but there you face a problem that these are very hard to extend . and so , what i ' m what i try to do is try to model this , in a way that you could really combine , knowledge from very different sources , and , looking into some of the ideas that the semantic web community , came up with . trying to have , an approach how to integrate s certain representation of certain concepts and also some computational rules , what you can do with those . what i ' m also looking into is a probabilistic approach into this because document retrievals is a very fuzzy procedure , so it 's probably not that easy to simply have a symbolic , computational model . that that probably is n't expressive enough . so that 's another thing , which you 're also , looking into right now . 
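the " servlets implies java " example amounts to a small forward - chaining step over the query context before matching it against document conditions . a minimal sketch , with an invented two - rule base :

```python
# toy context inference for the smart faq idea ; the rule base is invented .
RULES = {"servlets": {"java"}, "java": {"jvm"}}

def expand(context: set) -> set:
    """close the context under the implication rules (e.g. servlets -> java)."""
    facts = set(context)
    changed = True
    while changed:
        changed = False
        for premise, conclusions in RULES.items():
            if premise in facts and not conclusions <= facts:
                facts |= conclusions
                changed = True
    return facts

def score(doc_conditions: set, context: set) -> int:
    """the matching metric from the talk: how many document conditions hold."""
    return len(doc_conditions & context)

ctx = expand({"servlets", "homework-3"})
print(score({"java", "homework-3"}, ctx))   # -> 2
```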
and then , as an add - on to this whole idea , that would be now , depending on what the search engine or the content repository depending on which , rules and which ontologies it uses , or its view of the world , you can get very different results . so it might ma make a lot of sense to actually query a lot of different search engines . and there you could have an idea where you actually have a peer to peer approach , where we 're all carrying around our individual bookshelves , and , if you have a question about a homework , it 's probably makes sense to ask somebody who 's in your class with you , the guru in the certain area , rather than going to some yahoo - like , search engine . so these are some of the just in a nutshell , some of the ideas . and a lot of the even though it 's a very different domain , but a lot of the , issues are fairly similar . ok . grad a: and so some of the i how much about the larger heidelberg project , i are you grad a: so it seems like a lot of some of the issues are the same . it 's like , , the c context - based factors that influence how you interpret , grad a: , s how to interpret . in in this case , infer in knowing wanting to kinds of things to ask . we - we ' ve talked about that , but we have n't worried too much about that end of the discourse . but maybe you guys had that in the previous models . grad b: , in a in one t one s mmm , small difference in a way , is that he does n't have to come up with an answer , but he wants to point to the places w grad c: that can point you to the right reference . i do n't wanna compute the answer , so it 's a little bit easier for me . grad b: . , you have to s still m understand what the content says about itself , and then match it to what you think the informational needs grad a: so you also do n't have to figure out what the content is . you 're just taking the keywords as a topic text , as grad c: i assume that the there will be learning systems that tag their content . and , m @ and what i envision is that you rather than just supplying a bunch of keywords you could for an faq you could state like a logic condition , when this document applies . so " this document explains how to set up your , mail account on linux " like this . so something very specific that you can then but the that the key point with these , learning systems is that , a learning system is only as good as the amount of content it carries . grad c: you can have the best learning system with the best search interface , if there 's no content inside of it , it 's not very useful . so ultimately because , developing these rules and these inference inferences is very costly , so , you must be able to reuse some existing , domain information , or ontologies that other people wrote and then try to integrate them , and then also search the entire web , rather than just the small , content management system . so that 's crucial for the success of or @ grad a: so , you 're not i ' m trying to figure out how it maps to the kinds of things that we ' ve talked about in this group , and , actually associated groups , cuz some of us do pretty detailed linguistic analyses , and i ' m guessing that you wo n't be doing that ? professor f: on the other hand , framenet could be useful . so do the framenet story ? professor f: . th - that 's another thing you might wanna look into while you 're here . 
professor f: because , , the standard story is that keyworks keywords evoke frames , and the frames may give you additional keywords or , if that a bunch of keywords , indicate a frame , then you can find documents that actually have the whole frame , rather th than just , individual professor f: so there 's a lot of , and people are looking at that . most of the work here is just trying to get the frames right . there 's linguists and there 's a lot of it and they 're busily working away . but there are some application efforts trying to exploit it . and this looks t it seems to be that this is a place where you might be able to do that . grad c: . . i ' m i could learn a lot about , just how to come up with these structures , cuz it 's very easy to whip up something quickly , but it maybe then makes sense to me , but not to anybody else , and if we want to share and integrate things , they must be designed really . grad b: remember the , prashant story ? the no linguistic background person that the iu sent over here . and andreas and i tried to come up wi or we had come up actually with a with him working on an interface for framenet , as it was back then , that would p do some of the work for this machine , which , never got done because prashant found a happy occupation professor f: w , i know , it he w he did w what he did was much more s sensible for him . grad b: which in the . but so i ' m just saying , the , we had that idea professor f: you were y no , this was supposedly an exchange program , and i we , it 's fine . we do n't care , but it just i ' m a little surprised that , andreas did n't come up with anyone else he wanted to send . professor f: i had forgotten a i to be honest with you , i 'd forgotten we had a program . professor f: no , no . there was a whole co there was a little contract signed . it was grad c: , . it 's ju it 's more the lack of students , really , and w we have all these sponsors that are always eager to get some teams . grad c: but if i were a student , i 'd love to come here , rather than work for some german { nonvocalsound } company , or grad c: , i did n't say anybody to anything to offend , except for the sponsors maybe , but professor f: right . so i thi tha that 's one of the things that might be worth looking into while you 're here . , unfortunately , srini , who is heavily involved in daml and all this is himself out of town . professor f: anyway , s , you 'll see you 'll certainly see a lot of the people there . grad a: the other person of is dan gildea ? because he did some work on topic spotting grad a: w , which is , you . i do n't depending on how you wanna integrate with that end , like , taking the data and fig you said the learning systems that figure out we there 's someone in icsi who actually has been working on has worked on that kinda , and he 's worked with frame net , so you could talk to him about , both of those things at once . and he just finished writing a draft of his thesis . i u dan gildea , gildea . grad a: , if you fal solve the problem , hope you can do one for us too . professor f: alright , was there anything else for this ? one of these times soon we 're gon na hear about construal . grad b: . i ' m . i have it was november two thousand three or some no . wh - i had something in my calendar . grad b: , maybe i can bribe my way out of this . so i did some double checking and it seems like spring break in two thousand one . 
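the keyword - evokes - frame story at the top of this exchange could look like this in miniature ; the frame inventory below is a toy stand - in , not actual framenet data :

```python
# toy query expansion via frames ; frame name and lexical units are illustrative .
FRAMES = {"Commerce_buy": {"buy", "purchase", "seller", "goods"}}

def expand_query(keywords: set) -> set:
    expanded = set(keywords)
    for frame, lexical_units in FRAMES.items():
        if keywords & lexical_units:      # the keywords evoke the frame ...
            expanded |= lexical_units     # ... which supplies more keywords
    return expanded

print(expand_query({"buy"}))   # adds purchase , seller , goods
```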
professor f: , no , but he 's as you said , he 's , like the state legislature , he 's trying to offer us bribes . grad a: at least this is a private meeting . right , exactly , ok , that 's the link . grad b: this , they refused the budget again ? is it so about citris ? , still nothing . professor f: , this t the s we 're , involved in a literally three hundred million dollar , program . , with the state of california . and , the state of california is now a month and a half behind its legis its legally required date to approve a budget . so the budget has not been approved . and two days ago there 's two l , so , two branches of legislature . one branch approved it , and , yesterday there was this that the other branch would just approve it , but now there 's actually a little back sliding to people who approved it got flak from there , anyway . so , ! i have to tell you a wonderful story about this , ok ? and then we 'll go . so , i it turns out i wound up having lunch today with a guy named tom kalil . kill kalil . and , he now works at berkeley . he 's hired to run a lot of citris , even though we do n't have the money they so they ' ve been hiring people right and left , so , they think the money 's coming . so and he was , the chief staffer to clinton on technology matters . he was in the white house , i do n't remember what he was saying . a anyway , like that . and , is now doing all the politics for citris , but also , has a , a lot of interest in , actually doing things for society , so digital divide and like that . so that 's interesting to me but maybe not to you . but the really interesting thing was , he st he s said something about , i ' m interested in things that have high social multiplier , something that is of great social value . he said , " , this was his only example , if you had a adult literacy program that was as good as an individual tutor , and as compelling as a video game , then that would have a huge social impact . i said , " great ! that 's a good problem to work on . " anyway . so it was that , he 's got this view , of a , that 's what you should try to do , and b , language would be a good way to do it . professor f: this was . i did n't push him on the ch on the child thing , but , , a again , if you and this was literacy , which actually is somewhat different problem . maybe easier . so this is reading , rather than teaching another project we started on , and did n't get funded for was , to try to build an automatic tutoring program , for kids whose first language was n't english . which is like half the school population in california . something like that , is n't it ? enormous problem in california , and the idea was if we 're so smart about language understanding and speech understanding , could n't we build , programs that would be tutors for the kids . we think we could . anyway . so so but this is a slightly different problem , i know none of us have the spare time to look at it right now , but it i it 's interesting and i may , talk to him some more about is somebody already doing this , and like that . so anyway , that was today 's little story . grad b: ok . 
so i did manage to get pull my head out of the sling by sidetracking into citris , grad b: but or a temporarily putting it out of the sling but , i 'll volunteer to put it right back in by stating that i am n among some other things in the process of writing up that we have been discussing at our daily meetings , and also revising , for all the comments , the c the original construal proposal . and , if i put one and one together , i may end up with a number that 's greater than one and that potentially present once you get back . professor f: anyway . , so ok , so that 'd be great , but i 'd it 's time again , right ? . ok .
the first phase of the data collection has finished. there is a new wizard for phase two , during which subjects will be given more complex scenarios. also finished are the modifications on smartkom: the remaining glitches will take no more than a day to iron out. a big part of the meeting was covered by the presentation of the prm of the proposed system. an alternative representation of the bayes-net , it depicts context features as classes , and dependencies as relations between them. the current outputs show the desirability of a site , as well as its eva mode. the fact that this model allows for instantiations of classes fits the research purposes much better than the extended belief-net. following this , a visiting researcher presented an overview of a parallel project at the international university. it attempts to build a smart tutoring system for a computer science course. the assumption is that document searches can give more personalised results , if they take into account contextual parameters ( user , situation ). although no detailed linguistic analysis takes place , it was suggested that the use of framenet could be a useful approach. there were also further suggestions for meetings with icsi researchers.

as the data collection is going into its second phase , more complex scenarios will be used to generate more intricate dialogues. subjects can be recruited from within the psychology department students , since such participation in experiments is compulsory in their syllabus. as the work on creating a prm for the system has progressed , it is now necessary to find an implementation that can work as a prm interpreter. there are no plans to build one from scratch , but it seems that other research groups at stanford may already have one. moreover , closer collaboration with the local prm group will be pursued with additional participation in their meetings.

an early version of the prm of the system presented the same problem as a flat bayes-net: the cpt's become too large. another prm issue that arose and remained unclear was how the probabilities are specified ( instead of learnt ) on the actual relations between the classes. furthermore , the lack of a prm interpreter is an open issue. it is not within the remit of the project to build one from scratch , therefore , an existing implementation has to be found. on the other hand , the discussion about the smart tutoring system being built at the international university showed the importance of finding out which context parameters are influential in a given domain. it is easy to hack up a system for a small domain , but making it scalable is much more difficult. finally , there was a passing mention of problems encountered with the speech synthesis module of smartkom.

the first phase of the data collection has been completed. thirty subjects were recorded in total. of those dialogues , ten have been transcribed. a new wizard will carry out the second phase. on top of this , a presentation of a prm of the proposed system took place. the prm comprises a set of classes , such as "user" , "site" , "route" , "time" and "query" with relations between them. another class incorporates a number of attributes ( "money affordability" , "travel compatibility" etc ) modelling the intermediate nodes of the bayes-net , as well as references to the other classes in the model. this model is much cleaner , and it also makes it easier to specify the cpt's. at this stage , the model makes two decisions ( outputs ): how much a site fits a user's needs and what the user intention is in eva terms.
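a rough sketch, in code, of the class structure the summary describes: classes with reference slots to one another, and an intermediate attribute such as travel compatibility that depends on a ( user , site ) pair, so one user object can be reused across many candidate sites. the class names follow the summary; the toy compatibility rule is invented.

```python
# PRM-style class structure sketch. Only the class names come from the
# summary; attribute values and the compatibility rule are illustrative.

from dataclasses import dataclass

@dataclass
class User:
    preferred_transport: str   # e.g. "car", "walking"

@dataclass
class Site:
    onsite_parking: bool

@dataclass
class Compatibility:
    # Intermediate node: holds reference slots to a User and a Site.
    user: User
    site: Site

    def travel_compatibility(self) -> float:
        # Toy stand-in for a CPT entry over the (user, site) pair.
        if self.user.preferred_transport == "car":
            return 0.9 if self.site.onsite_parking else 0.3
        return 0.7

# One user instance, reused across several site instances.
u = User(preferred_transport="car")
for s in (Site(onsite_parking=True), Site(onsite_parking=False)):
    print(Compatibility(u, s).travel_compatibility())
```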
grad c: , we had a long discussion about how much w how easy we want to make it for people to bleep things out . so morgan wants to make it hard . phd d: - . so if you , if you breathe under breathe and then you see af go off , then it 's p picking up your mouth noise . phd f: , if you listen to just the channels of people not talking , it 's like " @ " . it 's very disgust phd f: exactly . it 's very disconcerting . ok . so , i was gon na try to get out of here , like , in half an hour , cuz i really appreciate people coming , and the main thing that i was gon na ask people to help with today is to give input on what kinds of database format we should use in starting to link up things like word transcripts and annotations of word transcripts , so anything that transcribers or discourse coders or whatever put in the signal , with time - marks for , like , words and phone boundaries and all the we get out of the forced alignments and the recognizer . so , we have this , a starting point is clearly the channelized output of dave gelbart 's program , which don brought a copy of , grad c: , i ' m familiar with that . , we i already have developed an xml format for this . grad c: and so the only question is it the thing that you want to use or not ? have you looked at that ? , i had a web page up . phd f: right . so , i actually mostly need to be able to link up , or i it 's a question both of what the representation is and grad c: you mean , this i am gon na be standing up and drawing on the board . grad c: , so it definitely had that as a concept . so tha it has a single time - line , grad c: and then you can have lots of different sections , each of which have i ds attached to it , and then you can refer from other sections to those i ds , if you want to . so that , so that you start with a time - line tag . time - line . and then you have a bunch of times . i do n't e i do n't remember exactly what my notation was , grad c: t equals one point three two , and then i also had optional things like accuracy , and then " id equals t one , one seven " . and then , { nonvocalsound } i also wanted to be i to be able to not specify specifically what the time was and just have a stamp . , so these are arbitrary , assigned by a program , not by a user . so you have a whole bunch of those . and then somewhere la further down you might have something like an utterance tag which has " start equals t - seventeen , end equals t - eighteen " . so what that 's saying is , we know it starts at this particular time . we when it ends . right ? but it ends at this t - eighteen , which may be somewhere else . we say there 's another utterance . we what the t time actually is but we know that it 's the same time as this end time . grad c: ok . yes , exactly . and then , and then these also have i ds . so you could have some other tag later in the file that would be something like , , i , { nonvocalsound } " noise - type equals { nonvocalsound } door - slam " . ? and then , { nonvocalsound } you could either say " time equals a particular time - mark " or you could do other sorts of references . so or you might have a prosody prosody right ? d ? t ? grad c: you like the d ? that 's a good d . , so you could have some type here , and then you could have , the utterance that it 's referring to could be u - seventeen like that . phd f: , that seems g great for all of the encoding of things with time and , phd f: i my question is more , what d what do you do with , say , a forced alignment ? 
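a guess at what the format sketched on the board could look like as a file, before the forced-alignment question is taken up: a time-line of marks with ids, and utterances that point at those ids, so an utterance can end "at t18" even when t18's value is not yet known. tag and attribute names are reconstructed from the discussion, not an actual spec.

```python
# Parse a sample of the described format and resolve the time indirection.

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<annotation>
  <timeline>
    <time id="t17" value="1.32" accuracy="0.01"/>
    <time id="t18"/>               <!-- mark exists, value not yet known -->
  </timeline>
  <utterance id="u17" start="t17" end="t18">so we have this agenda</utterance>
  <noise type="door-slam" time="t17"/>
  <prosody type="rise" utterance="u17"/>
</annotation>
""")

# Build the id -> value map once; everything else refers to ids, so a
# re-alignment only has to rewrite the <timeline> section.
times = {t.get("id"): t.get("value") for t in doc.find("timeline")}
for utt in doc.iter("utterance"):
    print(utt.get("id"), "start =", times[utt.get("start")],
          "end =", times[utt.get("end")])   # end prints None until t18 is set
```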
phd f: you ' ve got all these phone labels , and what do you do if you just conceptually , if you get , transcriptions where the words are staying but the time boundaries are changing , cuz you ' ve got a new recognition output , or s what 's the , sequence of going from the waveforms that stay the same , the transcripts that may or may not change , and then the utterance which where the time boundaries that may or may not change ? phd a: , that 's that 's actually very nicely handled here because you could all you 'd have to change is the , time - stamps in the time - line without , changing the i ds . grad c: right . that 's , the who that 's why you do that extra level of indirection . so that you can just change the time - line . grad c: , this i do n't think i would do this for phone - level . for phone - level you want to use some binary representation because it 'll be too dense otherwise . phd f: so , if you were doing that and you had this companion , thing that gets called up for phone - level , what would that look like ? phd a: it 's just a matter of it being bigger . but if you have , barring memory limitations , or i w this is still the m grad c: it 's parsing limitations . i do n't want to have this text file that you have to read in the whole thing to do something very simple for . phd a: , no . you would use it only for purposes where you actually want the phone - level information , i 'd imagine . phd f: so you could have some file that configures how much information you want in your xml . grad c: i am imagining you 'd have multiple versions of this depending on the information that you want . grad c: i ' m just what i ' m wondering is whether for word - level , this would be ok . for word - level , it 's alright . grad c: for lower than word - level , you 're talking about so much data that i . i if that phd f: , we actually have so , one thing that don is doing , is we 're running for every frame , you get a pitch value , grad c: , that 's a , like it . it 's ics , icsi has a format for frame - level representation of features . phd f: that you could call that you would tie into this representation with like an id . grad c: so you would say " refer to this external file " . so that external file would n't be in phd d: but what 's the advantage of doing that versus just putting it into this format ? grad c: you do n't want to do it with that anything at frame - level you had better encode binary or it 's gon na be really painful . phd a: or you just compre , i like text formats . , b you can always , g - zip them , and , , c decompress them on the fly if y if space is really a concern . phd d: , i was thi i was thinking the advantage is that we can share this with other people . grad c: , but if you 're talking about one per frame , you 're talking about gigabyte - size files . you 're gon na actually run out of space in your filesystem for one file . phd a: right , ok . i would say ok , so frame - level is probably not a good idea . but for phone - level it 's perfectly phd a: but but most of the frames are actually not speech . so , people do n't v look at it , words times the average the average number of phones in an english word is , i , five maybe ? phd a: so , look at it , t number of words times five . that 's not that not phd f: , so you mean pause phones take up a lot of the long pause phones . phd f: that 's true . but you do have to keep them in there . y . grad c: so it 's debatable whether you want to do phone - level in the same thing . 
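a sketch of the "refer to an external file" idea for frame-level data: the xml stays small and just carries the sidecar's file name, while the per-frame values live in a binary file that supports random access by frame index. the layout ( one little-endian 32-bit float per frame ) is invented for illustration.

```python
# Binary sidecar for frame-level values (e.g. pitch), referenced from the
# XML by name/id. Layout is hypothetical: one float32 per frame.

import struct

# Writer: pack pitch values for frames 0..N-1 as consecutive 32-bit floats.
pitch = [118.2, 119.0, 121.5, 0.0]          # 0.0 marking unvoiced, say
with open("chan1.pitch.bin", "wb") as f:
    f.write(struct.pack(f"<{len(pitch)}f", *pitch))

# Reader: random access by frame index, no need to parse the whole file.
def pitch_at(path, frame, itemsize=4):
    with open(path, "rb") as f:
        f.seek(frame * itemsize)
        return struct.unpack("<f", f.read(itemsize))[0]

print(pitch_at("chan1.pitch.bin", 2))        # -> 121.5
```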
but , a anything at frame - level , even p - file , is too verbose . phd f: do you are you familiar with it ? i have n't seen this particular format , phd d: but , a minute , p - file for each frame is storing a vector of cepstral or plp values , grad c: so that what 's about the p - file it i built into it is the concept of frames , utterances , sentences , that thing , that structure . and then also attached to it is an arbitrary vector of values . and it can take different types . so it th they do n't all have to be floats . , you can have integers and you can have doubles , and all that . grad c: and it has a header format that describes it to some extent . so , the only problem with it is it 's actually storing the utterance numbers and the frame numbers in the file , even though they 're always sequential . and so it does waste a lot of space . but it 's still a lot tighter than ascii . and we have a lot of tools already to deal with it . grad c: there 's a ton of it . man - pages and , source code , and me . phd f: great . , that sounds good . i was just looking for something i ' m not a database person , but something standard enough that , if we start using this we can give it out , other people can work on it , grad c: and , we have a - configured system that you can distribute for free , and phd d: , it must be the equivalent of whatever you guys used to store feat your computed features in , right ? phd a: but , so there is something like that but it 's , probably not as sophist phd a: they ha it has its own , entropic has their own feature format that 's called , like , s - sd or some so sf like that . grad c: i ' m just wondering , would it be worth while to use that instead ? phd f: actually , we ' ve done this on prosodics and three or four places have asked for those prosodic files , and we just have an ascii , output of frame - by - frame . phd f: which is fine , but it gets unwieldy to go in and query these files with really huge files . , we could do it . i was just thinking if there 's something that where all the frame values are grad c: and a , if you have a two - hour - long meeting , that 's gon na phd f: and these are for ten - minute switchboard conversations , and so it 's doable , it 's just that you can only store a feature vector at frame - by - frame and it does n't have any , phd d: is is the sharing part of this a pretty important consideration or does that just , a thing to have ? phd f: i enough about what we 're gon na do with the data . but it would be good to get something that we can that other people can use or adopt for their own kinds of encoding . and just , we have to use some we have to make some decision about what to do . and especially for the prosody work , what it ends up being is you get features from the signal , and those change every time your alignments change . so you re - run a recognizer , you want to recompute your features , and then keep the database up to date . or you change a word , or you change a utterance boundary segment , which is gon na happen a lot . and so i wanted something where all of this can be done in a elegant way and that if somebody wants to try something or compute something else , that it can be done flexibly . it does n't have to be pretty , it just has to be , easy to use , and grad c: , the other thing we should look at atlas , the nist thing , and see if they have anything at that level . , i ' m not what to do about this with atlas , because they chose a different route . 
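a toy record layout in the spirit of the p-file description above -- each record carries its utterance and frame numbers alongside the feature values, which is exactly why the format "wastes space" on indices that are always sequential. this is not the real p-file spec, just an illustration of the trade-off.

```python
# Toy binary records: (utt_id, frame_id, feature). NOT the actual p-file
# format; it only illustrates the cost of storing sequential indices.

import struct

REC = struct.Struct("<iif")                  # utt_id, frame_id, one feature

records = [(0, 0, 1.5), (0, 1, 1.7), (1, 0, 0.2)]
blob = b"".join(REC.pack(*r) for r in records)

# 8 of the 12 bytes per record are indices we could have derived implicitly.
print(len(blob), "bytes for", len(records), "records")
for off in range(0, len(blob), REC.size):
    print(REC.unpack_from(blob, off))
```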
i chose something that th - there are two choices . your your file format can know about know that you 're talking about language and speech , which is what i chose , and time , or your file format can just be a graph representation . and then the application has to impose the structure on top . so what it looked like atlas chose is , they chose the other way , which was their file format is just nodes and links , and you have to interpret what they mean yourself . grad c: , because i knew that we were doing speech , and it was better if you 're looking at a raw file to be t for the tags to say " it 's an utterance " , as opposed to the tag to say " it 's a link " . so , but grad c: , the other thing is if we choose to use atlas , which maybe we should just do , we should just throw this out before we invest a lot of time in it . phd f: i do n't so this is what the meeting 's about , just how to , cuz we need to come up with a database like this just to do our work . and i actually do n't care , as long as it 's something useful to other people , what we choose . so maybe it 's maybe oth , if you have any idea of how to choose , cuz i do n't . grad c: , i chose this for a couple reasons . one of them is that it 's easy to parse . you do n't need a full xml parser . it 's very easy to just write a perl script to parse it . phd f: and you can have as much information in the tag as you want , right ? grad c: , i have it structured . so each type tag has only particular items that it can take . grad c: if you have more information . so what what nist would say is that instead of doing this , you would say something like " link { nonvocalsound } start equals , , some node id , end equals some other node id " , and then " type " would be " utterance " . , so it 's very similar . phd f: so why would it be a waste to do it this way if it 's similar enough that we can always translate it ? phd d: it probably would n't be a waste . it would mean that at some point if we wanted to switch , we 'd just have to translate everything . grad c: they 're developing a big infrastructure . and so it seems to me that if we want to use that , we might as go directly to what they 're doing , rather than phd a: if we want to do they already have something that 's that would be useful for us in place ? phd d: see , that 's the question . , how stable is their are they ready to go , grad c: the last time i looked at it was a while ago , probably a year ago , when we first started talking about this . and at that time at least it was still not very complete . and so , specifically they did n't have any external format representation at that time . they just had the conceptual node , annotated transcription graph , which i really liked . and that 's exactly what this is based on . since then , they ' ve developed their own external file format , which is , , this s this thing . , and they ' ve also developed a lot of tools , but i have n't looked at them . maybe i should . phd f: , would the tools run on something like this , if you can translate them anyway ? phd a: if it 's conceptually close , and they already have or will have tools that everybody else will be using , it would be crazy to do something s , separate that phd f: actually , so it 's that would really be the question , is just what you would feel is in the long run the best thing . 
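the same utterance written both ways, as contrasted above -- a typed tag that "knows about speech", versus an atlas-style generic link whose meaning lives in a type attribute -- plus the one-line translation between them. both forms are reconstructions from the discussion, not the official atlas interchange format.

```python
# Typed-tag encoding vs. generic node/link encoding of one utterance.

import xml.etree.ElementTree as ET

typed = ET.fromstring('<utterance start="t17" end="t18"/>')
generic = ET.fromstring('<link start="t17" end="t18" type="utterance"/>')

# Translating typed -> generic is mechanical: the tag name becomes the type.
def to_generic(elem):
    return ET.Element("link", dict(elem.attrib, type=elem.tag))

print(ET.tostring(to_generic(typed)))
# -> b'<link start="t17" end="t18" type="utterance" />'
```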
phd f: cuz once we start , doing this i do n't we do n't actually have enough time to probably have to rehash it out again grad c: the other thing the other way that i established this was as easy translation to and from the transcriber format . phd f: , i like this . this is intuitively easy to actually r read , as easy it could as it could be . but , i suppose that as long as they have a type here that specifies " utt " , grad c: the with this , though , is that you ca n't really add any supplementary information . so if you suddenly decide that you want phd f: , if you look at it i in my mind i enough jane would know better , about the types of annotations and but i imagine that those are things that would , you guys mentioned this , that could span any it could be in its own channel , it could span time boundaries of any type , it could be instantaneous , things like that . and then from the recognition side we have backtraces at the phone - level . if if it can handle that , it could handle states or whatever . and then at the prosody - level we have frame like cepstral feature files , like these p - files or anything like that . and that 's the world of things that i and then we have the aligned channels , phd a: it seems to me you want to keep the frame - level separate . and then phd f: i definitely agree and i wanted to find actually a f a nicer format or a maybe a more compact format than what we used before . just cuz you ' ve got ten channels or whatever and two hours of a meeting . it 's it 's a lot of phd a: now now how would you represent , multiple speakers in this framework ? were you would just represent them as you would have like a speaker tag ? grad c: there 's a spea speaker tag up at the top which identifies them and then each utt the way i had it is each turn or each utterance , i do n't even remember now , had a speaker id tag attached to it . and in this format you would have a different tag , which would , be linked to the link . so so somewhere else you would have another thing that would be , let 's see , would it be a node or a link ? and so this one would have , an id is link seventy - four like that . and then somewhere up here you would have a link that , , was referencing l - seventy - four and had speaker adam . phd a: but but so how in the nist format do we express a hierarchical relationship between , say , an utterance and the words within it ? so how do you tell that these are the words that belong to that utterance ? grad c: , you would have another structure lower down than this that would be saying they 're all belonging to this id . phd f: and what if you actually have so right now what you have as utterance , the closest thing that comes out of the channelized is the between the segment boundaries that the transcribers put in or that thilo put in , which may or may not actually be , like , a s it 's usually not , the beginning and end of a sentence , say . phd f: , so it 's like a segment . , i assume this is possible , that if you have someone annotates the punctuation or whatever when they transcribe , you can say , from for from the c beginning of the sentence to the end of the sentence , from the annotations , this is a unit , even though it never actually i it 's only a unit by virtue of the annotations at the word - level . grad c: you 'd have another tag which says this is of type " sentence " . 
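one way the by-reference structure just described could be encoded: speakers declared once at the top, each utterance pointing at a speaker id, and words grouped under an utterance by listing their ids. attribute names are guesses.

```python
# Speaker declared once; utterance refers to speaker and to its words by id.

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<annotation>
  <speaker id="spk1" name="adam"/>
  <word id="w1" start="t17" end="t18">so</word>
  <word id="w2" start="t18" end="t19">anyway</word>
  <utterance id="u1" speaker="spk1" words="w1 w2"/>
</annotation>
""")

words = {w.get("id"): w.text for w in doc.iter("word")}
for utt in doc.iter("utterance"):
    print(utt.get("speaker"), [words[i] for i in utt.get("words").split()])
```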
and , what phd f: that should be possible as long as the but , what i do n't understand is where the where in this type of file that would be expressed . grad c: you would have another tag somewhere . it 's , there 're two ways of doing it . phd f: s so it would just be floating before the sentence or floating after the sentence without a time - mark . grad c: you could have some link type equals " sentence " , and id is " s - whatever " . and then lower down you could have an utterance . so the type is " utterance " equals " utt " . and you could either say that no . i grad c: i take that back . can you can you say that this is part of this , phd a: that some something may be a part of one thing for one purpose and another thing of another purpose . so f grad c: there 's one level there 's one more level of indirection that i ' m forgetting . phd a: suppose you have a word sequence and you have two different segmentations of that same word sequence . f say , one segmentation is in terms of , , sentences . and another segmentation is in terms of , i , prosodic phrases . and let 's say that they do n't nest . so , a prosodic phrase may cross two sentences . i if that 's true or not but let 's as phd f: , it 's definitely true with the segment . that 's what i exactly what i meant by the utterances versus the sentence could be phd a: so , you want to be s you want to say this word is part of that sentence and this prosodic phrase . but the phrase is not part of the sentence and neither is the sentence part of the phrase . grad c: i ' m pretty that you can do that , but i ' m forgetting the exact level of nesting . phd a: so , you would have to have two different pointers from the word up one level up , one to the sent grad c: so so what you would end up having is a tag saying " here 's a word , and it starts here and it ends here " . and then lower down you would say " here 's a prosodic boundary and it has these words in it " . and lower down you 'd have " here 's a sentence , phd f: an - right . so you would be able to go in and say , " give me all the words in the bound in the prosodic phrase and give me all the words in the " phd a: the the o the other issue that you had was , how do you actually efficiently extract , find and extract information in a structure of this type ? phd f: , and , you guys might i if this is premature because i suppose once you get the representation you can do this , but the kinds of things i was worried about is , phd a: y you got ta do this you 're gon na want to do this very quickly or else you 'll spend all your time searching through very complex data structures phd f: you 'd need a p a paradigm for how to do it . but an example would be " find all the cases in which adam started to talk while andreas was talking and his pitch was rising , andreas 's pitch " . that thing . grad c: , that 's gon na be is the rising pitch a feature , or is it gon na be in the same file ? phd f: , the rising pitch will never be hand - annotated . so the all the prosodic features are going to be automatically grad c: right ? because you 're gon na have to write a program that goes through your feature file and looks for rising pitches . phd f: so normally what we would do is we would say " what do we wanna assign rising pitch to ? " are we gon na assign it to words ? are we gon na just assign it to when it 's rising we have a begin - end rise representation ? but suppose we dump out this file and we say , for every word we just classify it as , w , rise or fall or neither ? 
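a sketch of two segmentations over the same words that need not nest -- the sentence / prosodic-phrase case above: each grouping just lists word ids, so a phrase can cross a sentence boundary, and "give me all the words in the prosodic phrase" ( with their rise / fall labels ) is a simple lookup. ids and labels are invented.

```python
# Words carry per-word rise/fall labels; two independent, possibly
# non-nesting groupings reference the words by id.

words = {
    "w1": ("it", "fall"), "w2": ("works", "rise"),
    "w3": ("great", "rise"), "w4": ("thanks", "fall"),
}
groups = [
    ("sentence", ["w1", "w2"]),
    ("sentence", ["w3", "w4"]),
    ("prosodic_phrase", ["w2", "w3"]),   # crosses the sentence boundary
]

# Query: all words in each prosodic phrase, with their pitch labels.
for kind, ids in groups:
    if kind == "prosodic_phrase":
        print([words[i] for i in ids])   # -> [('works','rise'),('great','rise')]
```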
phd f: so we would be , taking the format and enriching it with things that we wanna query in relation to the words that are already in the file , and then querying it . phd a: you want a grep that 's that works at the structural on the structural representation . grad c: you have that . there 's a standard again in xml , specifically for searching xml documents structured x - xml documents , where you can specify both the content and the structural position . phd a: , but it 's not clear that 's that 's relative to the structure of the xml document , grad c: you use it as a tool . you use it as a tool , not an end - user . it 's not an end - user thing . it 's it 's you would use that to build your tool to do that search . phd a: be because here you 're specifying a lattice . so the underlying that 's the underlying data structure . and you want to be able to search in that lattice . grad c: no , no . the whole point is that the text and the lattice are isomorphic . they represent each other completely . so that th phd f: that 's true if the features from your acoustics or whatever that are not explicitly in this are at the level of these types . that that if you can do that grad c: , but that 's gon na be the trouble no matter what . no matter what format you choose , you 're gon na have the trou you 're gon na have the difficulty of relating the frame - level features phd f: that 's right . that 's why i was trying to figure out what 's the best format for this representation . and it 's still gon na be , not direct . , it or another example was , , where in the language where in the word sequence are people interrupting ? i that one 's actually easier . phd d: what about what about , the idea of using a relational database to , store the information from the xml ? so you would have xml would , you could use the xml to put the data in , and then when you get data out , you put it back in xml . so use xml as the transfer format , phd d: , but then you store the data in the database , which allows you to do all kinds of good search things in there . grad c: the , one of the things that atlas is doing is they 're trying to define an api which is independent of the back store , so that , you could define a single api and the storage could be flat xml files or a database . my opinion on that is for the s that we 're doing , i suspect it 's overkill to do a full relational database , that , just a flat file and , search tools i bet will be enough . grad c: but that 's the advantage of atlas , is that if we actually take decide to go that route completely and we program to their api , then if we wanted to add a database later it would be pretty easy . phd f: it seems like the thing you 'd do if i , if people start adding all kinds of s bells and whistles to the data . and so that might be , it 'd be good for us to know to use a format where we know we can easily , input that to some database if other people are using it . something like that . grad c: i ' m just a little hesitant to try to go whole hog on the whole framework that nist is talking about , with atlas and a database and all that , cuz it 's a big learning curve , just to get going . whereas if we just do a flat file format , it may not be as efficient but everyone can program in perl and use it . phd a: i ' m still , not convinced that you can do much on the text on the flat file that , the text representation . e because the text representation is gon na be , not reflecting the structure of your words and annotations . 
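a sketch of the relational-database route just raised : xml ( or any flat file ) stays the transfer format , and the database answers the structural questions . the schema , names and times below are invented ; only the shape of the query matters .

```python
# load word-level annotations into a database, then ask the structural
# question directly in sql. schema and values are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (speaker TEXT, word TEXT, start REAL, end REAL)")
con.executemany("INSERT INTO words VALUES (?, ?, ?, ?)", [
    ("andreas", "so",   1.00, 1.20),
    ("andreas", "then", 1.20, 1.45),
    ("adam",    "yeah", 1.30, 1.50),   # adam starts while andreas is talking
])

# "find all the cases in which adam started to talk while andreas was talking":
rows = con.execute("""
    SELECT a.word, a.start
    FROM   words a JOIN words b
    ON     a.speaker = 'adam' AND b.speaker = 'andreas'
    AND    a.start > b.start  AND a.start < b.end
""").fetchall()
print(rows)                            # [('yeah', 1.3)]
```

the rising-pitch half of the example query would come from joining in a word-level table dumped from the frame-level features , i.e. exactly the data-reduction step discussed above .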
it 's just it 's grad c: , if it 's not representing it , then how do you recover it ? it 's representing it . phd a: y . you can use perl to read it in and construct a internal representation that is essentially a lattice . but , the and then grad c: for perl if you want to just do perl . if you wanted to use the structured xml query language , that 's a different thing . and it 's a set of tools that let you specify given the d - ddt dtd of the document , what sorts of structural searches you want to do . so you want to say that , you 're looking for , a tag within a particular tag that has this particular text in it , and , refers to a particular value . and so the point is n't that an end - user , who is looking for a query like you specified , would n't program it in this language . what you would do is , someone would build a tool that used that as a library . so that they so that you would n't have to construct the internal representations yourself . phd f: is a see , the kinds of questions , at least in the next to the end of this year , are there may be a lot of different ones , but they 'll all have a similar nature . they 'll be looking at either a word - level prosodic , an a value , phd f: like a continuous value , like the slope of something . but , we 'll do something where we some data reduction where the prosodic features are sort o , either at the word - level or at the segment - level , they 're not gon na be at the phone - level and they 're no not gon na be at the frame - level when we get done with giving them simpler shapes and things . and so the main thing is just being able , i , the two goals . , one that chuck mentioned is starting out with something that we do n't have to start over , that we do n't have to throw away if other people want to extend it for other kinds of questions , and being able to at least get enough , information out on where we condition the location of features on information that 's in the file that you put up there . and that would do it , grad c: and then there are long - term , big - infrastructure solutions . and so we want to try to pick something that lets us do a little bit of both . phd f: in the between , and especially that the representation does n't have to be thrown away , even if your tools change . grad c: and so it seems to me that , i have to look at it again to see whether it can really do what we want , but if we use the atlas external file representation , it seems like it 's rich enough that you could do quick tools just as i said in perl , and then later on if we choose to go up the learning curve , we can use the whole atlas inter infrastructure , phd f: if you would l look at that and let us you think . , we 're guinea pigs , cuz i want to get the prosody work done but i do n't want to waste time , getting the grad c: , i would n't for the formats , because anything you pick we 'll be able to translate to another form . phd a: ma , maybe you should actually look at it yourself too to get a sense of what it is you 'll be dealing with , because , , adam might have one opinion but you might have another , phd f: especially if there 's , e , if someone can help with at least the setup of the right phd f: , hi . the right representation , then , i , i hope it wo n't we do n't actually need the whole full - blown thing to be ready , phd f: so maybe if you guys can look at it and see what , we 're we 're actually just phd f: wrapping up , but , it 's a short meeting , i . 
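the structured-search standard alluded to here corresponds , in later terms , to xpath / xquery ; python's elementtree ships a small xpath subset , which is enough to show a query constraining both structure and content . the tag names repeat the invented ones from the earlier sketch .

```python
# a structural query: find w elements, but only inside utt elements whose
# speaker attribute matches. this is the library a search tool builds on,
# not something an end-user would type.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<meeting>
  <utt id="u1" speaker="adam"><w start="t1">ok</w></utt>
  <utt id="u2" speaker="andreas"><w start="t2">so</w></utt>
</meeting>
""")

for w in doc.findall(".//utt[@speaker='andreas']/w"):
    print(w.text, w.get("start"))      # so t2
```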
is there anything else , like that helps me a lot , grad c: , the other thing we might want to look at is alternatives to p - file . , th the reason i like p - file is i ' m already familiar with it , we have expertise here , and so if we pick something else , there 's the learning - curve problem . but , it is just something we developed at icsi . grad c: , that 's gon na be a problem no matter what . you have the two - gigabyte limit on the filesystem size . and we definitely hit that with broadcast news . phd a: maybe you could extend the api to , support , like splitting up , conceptually one file into smaller files on disk so that you can essentially , have arbitrarily long f grad c: most of the tools can handle that . we did n't do it at the api - level . we did it at the t tool - level . that that most many of them can s you can specify several p - files and they 'll just be done sequentially . phd f: so , i , if you and don can if you can show him the p - file and see . so this would be like for the f - zero grad c: , if you do " man p - file " or " apropos p - file " , you 'll see a lot . grad b: i ' ve used the p - file , . i ' ve looked at it at least , briefly , when we were doing s something . grad c: i did n't de i did n't develop it . , it was dave johnson . so it 's all part of the quicknet library . it has all the utilities for it . phd a: no , p - files were around way before quicknet . p - files were around when w with , rap . grad c: but there are ni they 're the quicknet library has a bunch of things in it to handle p - files , so it works pretty well . phd f: and that is n't really , i , as important as the main i what you call it , the main word - level phd f: , that 's really useful . , this is exactly the thing that i wanted to settle . so phd f: i it 's also a political deci , if you feel like that 's a community that would be good to tie into anyway , then it sounds like it 's worth doing . grad c: and , w , as i said , i what i did with this i based it on theirs . it 's just they had n't actually come up with an external format yet . so now that they have come up with a format , it does n't it seems pretty reasonable to use it . but let me look at it again . as i said , that grad c: there 's one level there 's one more level of indirection and i ' m just blanking on exactly how it works . i got ta look at it again . phd f: , we can start with , i , this input from dave 's , which you had printed out , the channelized input . cuz he has all of the channels , with the channels in the tag and like that . phd f: and so then it would just be a matter of getting making to handle the annotations that are , not at the word - level and , t to import the phd f: any annotation that , like , is n't already there . , anything you can envision . postdoc e: so what i was imagining was , so dave says we can have unlimited numbers of green ribbons . and so put , a green ribbon on for an overlap code . and since we w we it 's important to remain flexible regarding the time bins for now . and so it 's to have however , you want to have it , time , located in the discourse . so , if we tie the overlap code to the first word in the overlap , then you 'll have a time - marking . it wo n't it 'll be independent of the time bins , however these e evolve , shrink , or whatever , increase , or also , you could have different time bins for different purposes . and having it tied to the first word in an overlap segment is unique , , anchored , clear .
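picking up the p-file thread above : the real format is documented at icsi ( " man p-file " ) and is not reproduced here . the stand-in below , with an invented record layout , only illustrates the two points that came up , binary rather than ascii frame records , and one conceptual stream split across several files that are read back sequentially to stay under a per-file size limit .

```python
# not the actual icsi p-file format, just a stand-in with the same shape:
# each record is (utterance number, frame number, feature vector),
# packed binary instead of ascii.
import struct
from pathlib import Path

N_FEATS = 3                                  # e.g. f0, energy, voicing
REC = struct.Struct(f"<ii{N_FEATS}f")        # utt, frame, then the features

def write_frames(path, rows):
    with open(path, "wb") as f:
        for utt, frame, feats in rows:
            f.write(REC.pack(utt, frame, *feats))

def read_frames(*paths):
    # several files treated as one stream, done sequentially,
    # which is how the tool-level splitting described above works
    for path in paths:
        buf = Path(path).read_bytes()
        for off in range(0, len(buf), REC.size):
            utt, frame, *feats = REC.unpack_from(buf, off)
            yield utt, frame, feats

write_frames("a.feat", [(0, 0, [120.0, 0.5, 1.0]), (0, 1, [118.0, 0.4, 1.0])])
write_frames("b.feat", [(1, 0, [0.0, 0.1, 0.0])])
print(list(read_frames("a.feat", "b.feat")))
```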
and it would just end up on a separate ribbon . so the overlap coding is gon na be easy with respect to that . you look puzzled . postdoc e: but let me just no . w the idea is just to have a separate green ribbon , and let 's say that this is a time bin . there 's a word here . this is the first word of an overlapping segment of any length , overlapping with any other , word , i segment of any length . and , then you can indicate that this here was perhaps a ch a backchannel , or you can say that it was , a usurping of the turn , or you can , any number of categories . but the fact is , you have it time - tagged in a way that 's independent of the , sp particular time bin that the word ends up in . if it 's a large unit or a small unit , or we sh change the boundaries of the units , it 's still unique and , fits with the format , flexible , all that . phd a: it would be , gr this is r regarding , it 's related but not directly germane to the topic of discussion , but , when it comes to annotations , you often find yourself in the situation where you have different annotations of the same , say , word sequence . ok ? and sometimes the word sequences even differ slightly because they were edited s at one place but not the other . so , once this data gets out there , some people might start annotating this for , i , dialogue acts or , , topics or what the heck . there 's a zillion things that people might annotate this for . and the only thing that is really common among all the versi the various versions of this data is the word sequence , or approximately . phd a: or the times . but , see , if you 'd annotate dialogue acts , you do n't necessarily want to or topics you do n't really want to be dealing with time - marks . phd a: you 'd it 's much more efficient for them to just see the word sequence , right ? , most people are n't as sophisticated as we are here with , , time alignments and . so the phd a: the p my point is that you 're gon na end up with , word sequences that are differently annotated . and you want some tool , that is able to merge these different annotations back into a single , version . , and we had this problem very massively , at sri when we worked , a while back on , on dialogue acts as , , what was it ? phd a: utterance types . there 's , automatic , punctuation and like that . because we had one set of annotations that were based on , one version of the transcripts with a particular segmentation , and then we had another version that was based on , a different s slightly edited version of the transcripts with a different segmentation . so , we had these two different versions which were , you could tell they were from the same source but they were n't identical . so it was extremely hard to reliably merge these two back together to correlate the information from the different annotations . grad c: i do n't see any way that file formats are gon na help us with that . it 's it 's all a question of semantic . phd a: but once you have a file format , imagine writing not personally , but someone writing a tool that is essentially an alignment tool , that mediates between various versions , and , like th , you have this thing in unix where you have , diff . phd a: there 's the , diff that actually tries to reconcile different two diffs f based on the same original . phd a: something like that , but operating on these lattices that are really what 's behind this , this annotation format . 
grad c: there 's actually a diff library you can use to do things like that so you have different formats . phd a: so somewhere in the api you would like to have like a merge or some function that merges two versions . grad c: , it 's gon na be very hard . any structured anything when you try to merge is really , really hard because you ha i the hard part is n't the file format . the hard part is specifying what you mean by " merge " . phd f: but the one thing that would work here actually for i that is more reliable than the utterances is the speaker ons and offs . so if you have a good , grad c: the problem is saying " what are the semantics , what do you mean by " merge " ? " phd a: so so just to let what we where we kluged it by , doing , by doing hhh . both were based on words , so , bo we have two versions of the same words intersp , sprinkled with different tags for annotations . phd a: but , it had lots of errors and things would end up in the wrong order , and . so , if you had a more it was a kluge because it was reducing everything to , to textual alignment . phd f: d is n't that something where whoever if the people who are making changes , say in the transcripts , cuz this all happened when the transcripts were different ye , if they tie it to something , like if they tied it to the acoustic segment if they what ? then or if they tied it to an acoustic segment and we had the time - marks , that would help . but the problem is exactly as adam said , that you get , y you do n't have that information or it 's lost in the merge somehow , postdoc e: , can i ask one question ? it it seems to me that , we will have o an official version of the corpus , which will be only one version in terms of the words where the words are concerned . we 'd still have the merging issue maybe if coding were done independently of the phd a: because if the data gets out , people will do all kinds of things to it . and , s , several years from now you might want to look into , the prosody of referring expressions . and someone at the university of who knows where has annotated the referring expressions . so you want to get that annotation and bring it back in line with your data . phd f: but they ' ve also and so that 's exactly what we should somehow when you distribute the data , say that , that have some way of knowing how to merge it back in and asking people to try to do that . grad c: so so imagine his example is a good one . imagine that this person who developed the corpus of the referring expressions did n't include time . he included references to words . postdoc e: but then could n't you just indirectly figure out the time tied to the word ? postdoc e: not but you 'd have some anchoring point . he could n't have changed all the words . grad c: but they could have changed it a little . the , that they may have annotated it off a word transcript that is n't the same as our word transcript , so how do you merge it back in ? i understand what you 're saying . and i the answer is , it 's gon na be different every time . it 's j it 's just gon na be i it 's exactly what i said before , grad c: which is that " what do you mean by " merge " ? " so in this case where you have the words and you do n't have the times , what do you mean by " merge " ? if you tell me what you mean , write a program to do it . phd f: you can merge at the level of the representation that the other person preserved and that 's it . 
grad c: so so in this one you would have to do a best match between the word sequences , grad c: extract the times f from the best match of theirs to yours , and use that . phd f: and then infer that their time - marks are somewhere in between . , exactly . postdoc e: but it could be that they just , it could be that they chunked they lost certain utterances and all that , phd f: , i , w i did n't want to keep people too long and adam wanted t people i 'll read the digits . if anyone else offers to , that 'd be great . phd a: for th for the { nonvocalsound } for the benefit of science we 'll read the digits . phd f: a lot . it 's really helpful . , adam and don { nonvocalsound } will meet and that 's great . very useful . go next .
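the best-match merge grad c sketches at the end can be illustrated with a plain sequence alignment ; difflib stands in for a real alignment tool , and the word lists and times are invented .

```python
# align an outside annotator's word sequence (no times) against ours
# (with times), then carry our time-marks across the matched words.
from difflib import SequenceMatcher

ours   = [("so", 1.0), ("we", 1.2), ("ran", 1.3), ("it", 1.5), ("again", 1.6)]
theirs = ["so", "we", "reran", "it", "again"]    # slightly edited transcript

m = SequenceMatcher(a=[w for w, _ in ours], b=theirs)
times = {}
for op, a1, a2, b1, b2 in m.get_opcodes():
    if op == "equal":
        for i, j in zip(range(a1, a2), range(b1, b2)):
            times[j] = ours[i][1]                # their word j gets our time

print(times)    # {0: 1.0, 1: 1.2, 3: 1.5, 4: 1.6}
# the unmatched word ("reran") gets no mark; its time can only be inferred
# to lie somewhere between the surrounding anchors, as noted above.
```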
two main options were discussed as to the organisation of the collected data. on the one hand , there is a bespoke xml structure that connects transcriptions and annotations ( down to the word-level ) to a common timeline. its advantages are that it is easier to read , parse , map onto the transcriber format and to expand with extra features. phone-level analysis can be included in the same structure , or in a separate , linked file. the respective frame-level representation can be handled by p-files , a technology developed at icsi , which also comes with a library of tools. separation of levels of analysis makes files more compact and manageable. xml standards offer libraries that can be used for the development of search tools. on the other hand , the atlas ( nist ) technology offers a very similar , but more generic organisational scheme based on nodes and links. these are labeled with domain-specific types , like "utterance" or "speaker". this option offers well-developed infrastructure and flexibility as to the type of data storage ( flat xml files or a relational database ). in either case , it is important for the chosen format to allow for fast searches and flexible updates and , if possible , to be reusable in future work.

in order to confirm the suitability of the data format provided by the atlas project , its current state of development will be investigated. more specifically , the issues that have to be ascertained are , firstly , whether the external file representation offers a format that would be appropriate for speech data , and , secondly , how the linking between the different annotations ( e.g. , between word-level representations and prosodic-feature structures ) can be achieved. regardless of the actual format , however , there was consensus that keeping levels of analysis ( words , phones , frames , etc. ) in separate , inter-linked files can make their management easier.

choosing a project-specific format for the representation of the data might not be optimal for future work. on the other hand , it is not yet clear whether a more standardised , but generic technology , like that of the atlas project , can accommodate all the requirements of speech analysis. regardless of the particular format , including all annotations ( sentences , words , phones , frames , etc. ) in one file could result in unmanageable file sizes. searching , updating or simply parsing a file for a simple task can become an unwieldy process. even p-files , which are only for frame-level annotation , may be too verbose for the amount of data resulting from hour-long recordings. the actual mapping of word-level transcriptions to frame-level representations is expected to be problematic anyway. likewise , problems will arise if , in the future , slightly different transcripts of the same data are annotated in formats that do not include time-marks. trying to merge such annotations later will not be easy , because of the combination of transcription discrepancies with the loss of the underlying connection offered by the time-marks.

an xml scheme to build representations of the data is already available. it incorporates information regarding utterances , sentences , speakers , words , etc. all these features are linked together via time-marks that slot into a single , common timeline. this format also allows for linking to different levels of representation of the same data. for the frame-level representation , p-files are a readily available technology , developed at icsi.
besides the appropriate format , p-files come with a library of tools and the respective documentation.
###dialogue: grad c: , we had a long discussion about how much w how easy we want to make it for people to bleep things out . so morgan wants to make it hard . phd d: - . so if you , if you breathe under breathe and then you see af go off , then it 's p picking up your mouth noise . phd f: , if you listen to just the channels of people not talking , it 's like " @ " . it 's very disgust phd f: exactly . it 's very disconcerting . ok . so , i was gon na try to get out of here , like , in half an hour , cuz i really appreciate people coming , and the main thing that i was gon na ask people to help with today is to give input on what kinds of database format we should use in starting to link up things like word transcripts and annotations of word transcripts , so anything that transcribers or discourse coders or whatever put in the signal , with time - marks for , like , words and phone boundaries and all the we get out of the forced alignments and the recognizer . so , we have this , a starting point is clearly the channelized output of dave gelbart 's program , which don brought a copy of , grad c: , i ' m familiar with that . , we i already have developed an xml format for this . grad c: and so the only question is it the thing that you want to use or not ? have you looked at that ? , i had a web page up . phd f: right . so , i actually mostly need to be able to link up , or i it 's a question both of what the representation is and grad c: you mean , this i am gon na be standing up and drawing on the board . grad c: , so it definitely had that as a concept . so tha it has a single time - line , grad c: and then you can have lots of different sections , each of which have i ds attached to it , and then you can refer from other sections to those i ds , if you want to . so that , so that you start with a time - line tag . time - line . and then you have a bunch of times . i do n't e i do n't remember exactly what my notation was , grad c: t equals one point three two , and then i also had optional things like accuracy , and then " id equals t one , one seven " . and then , { nonvocalsound } i also wanted to be i to be able to not specify specifically what the time was and just have a stamp . , so these are arbitrary , assigned by a program , not by a user . so you have a whole bunch of those . and then somewhere la further down you might have something like an utterance tag which has " start equals t - seventeen , end equals t - eighteen " . so what that 's saying is , we know it starts at this particular time . we when it ends . right ? but it ends at this t - eighteen , which may be somewhere else . we say there 's another utterance . we what the t time actually is but we know that it 's the same time as this end time . grad c: ok . yes , exactly . and then , and then these also have i ds . so you could have some other tag later in the file that would be something like , , i , { nonvocalsound } " noise - type equals { nonvocalsound } door - slam " . ? and then , { nonvocalsound } you could either say " time equals a particular time - mark " or you could do other sorts of references . so or you might have a prosody prosody right ? d ? t ? grad c: you like the d ? that 's a good d . , so you could have some type here , and then you could have , the utterance that it 's referring to could be u - seventeen like that . phd f: , that seems g great for all of the encoding of things with time and , phd f: i my question is more , what d what do you do with , say , a forced alignment ? 
phd f: you ' ve got all these phone labels , and what do you do if you just conceptually , if you get , transcriptions where the words are staying but the time boundaries are changing , cuz you ' ve got a new recognition output , or s what 's the , sequence of going from the waveforms that stay the same , the transcripts that may or may not change , and then the utterance which where the time boundaries that may or may not change ? phd a: , that 's that 's actually very nicely handled here because you could all you 'd have to change is the , time - stamps in the time - line without , changing the i ds . grad c: right . that 's , the who that 's why you do that extra level of indirection . so that you can just change the time - line . grad c: , this i do n't think i would do this for phone - level . for phone - level you want to use some binary representation because it 'll be too dense otherwise . phd f: so , if you were doing that and you had this companion , thing that gets called up for phone - level , what would that look like ? phd a: it 's just a matter of it being bigger . but if you have , barring memory limitations , or i w this is still the m grad c: it 's parsing limitations . i do n't want to have this text file that you have to read in the whole thing to do something very simple for . phd a: , no . you would use it only for purposes where you actually want the phone - level information , i 'd imagine . phd f: so you could have some file that configures how much information you want in your xml . grad c: i am imagining you 'd have multiple versions of this depending on the information that you want . grad c: i ' m just what i ' m wondering is whether for word - level , this would be ok . for word - level , it 's alright . grad c: for lower than word - level , you 're talking about so much data that i . i if that phd f: , we actually have so , one thing that don is doing , is we 're running for every frame , you get a pitch value , grad c: , that 's a , like it . it 's ics , icsi has a format for frame - level representation of features . phd f: that you could call that you would tie into this representation with like an id . grad c: so you would say " refer to this external file " . so that external file would n't be in phd d: but what 's the advantage of doing that versus just putting it into this format ? grad c: you do n't want to do it with that anything at frame - level you had better encode binary or it 's gon na be really painful . phd a: or you just compre , i like text formats . , b you can always , g - zip them , and , , c decompress them on the fly if y if space is really a concern . phd d: , i was thi i was thinking the advantage is that we can share this with other people . grad c: , but if you 're talking about one per frame , you 're talking about gigabyte - size files . you 're gon na actually run out of space in your filesystem for one file . phd a: right , ok . i would say ok , so frame - level is probably not a good idea . but for phone - level it 's perfectly phd a: but but most of the frames are actually not speech . so , people do n't v look at it , words times the average the average number of phones in an english word is , i , five maybe ? phd a: so , look at it , t number of words times five . that 's not that not phd f: , so you mean pause phones take up a lot of the long pause phones . phd f: that 's true . but you do have to keep them in there . y . grad c: so it 's debatable whether you want to do phone - level in the same thing . 
but , a anything at frame - level , even p - file , is too verbose . phd f: do you are you familiar with it ? i have n't seen this particular format , phd d: but , a minute , p - file for each frame is storing a vector of cepstral or plp values , grad c: so that what 's about the p - file it i built into it is the concept of frames , utterances , sentences , that thing , that structure . and then also attached to it is an arbitrary vector of values . and it can take different types . so it th they do n't all have to be floats . , you can have integers and you can have doubles , and all that . grad c: and it has a header format that describes it to some extent . so , the only problem with it is it 's actually storing the utterance numbers and the frame numbers in the file , even though they 're always sequential . and so it does waste a lot of space . but it 's still a lot tighter than ascii . and we have a lot of tools already to deal with it . grad c: there 's a ton of it . man - pages and , source code , and me . phd f: great . , that sounds good . i was just looking for something i ' m not a database person , but something standard enough that , if we start using this we can give it out , other people can work on it , grad c: and , we have a - configured system that you can distribute for free , and phd d: , it must be the equivalent of whatever you guys used to store feat your computed features in , right ? phd a: but , so there is something like that but it 's , probably not as sophist phd a: they ha it has its own , entropic has their own feature format that 's called , like , s - sd or some so sf like that . grad c: i ' m just wondering , would it be worth while to use that instead ? phd f: actually , we ' ve done this on prosodics and three or four places have asked for those prosodic files , and we just have an ascii , output of frame - by - frame . phd f: which is fine , but it gets unwieldy to go in and query these files with really huge files . , we could do it . i was just thinking if there 's something that where all the frame values are grad c: and a , if you have a two - hour - long meeting , that 's gon na phd f: and these are for ten - minute switchboard conversations , and so it 's doable , it 's just that you can only store a feature vector at frame - by - frame and it does n't have any , phd d: is is the sharing part of this a pretty important consideration or does that just , a thing to have ? phd f: i enough about what we 're gon na do with the data . but it would be good to get something that we can that other people can use or adopt for their own kinds of encoding . and just , we have to use some we have to make some decision about what to do . and especially for the prosody work , what it ends up being is you get features from the signal , and those change every time your alignments change . so you re - run a recognizer , you want to recompute your features , and then keep the database up to date . or you change a word , or you change a utterance boundary segment , which is gon na happen a lot . and so i wanted something where all of this can be done in a elegant way and that if somebody wants to try something or compute something else , that it can be done flexibly . it does n't have to be pretty , it just has to be , easy to use , and grad c: , the other thing we should look at atlas , the nist thing , and see if they have anything at that level . , i ' m not what to do about this with atlas , because they chose a different route . 
i chose something that th - there are two choices . your your file format can know about know that you 're talking about language and speech , which is what i chose , and time , or your file format can just be a graph representation . and then the application has to impose the structure on top . so what it looked like atlas chose is , they chose the other way , which was their file format is just nodes and links , and you have to interpret what they mean yourself . grad c: , because i knew that we were doing speech , and it was better if you 're looking at a raw file to be t for the tags to say " it 's an utterance " , as opposed to the tag to say " it 's a link " . so , but grad c: , the other thing is if we choose to use atlas , which maybe we should just do , we should just throw this out before we invest a lot of time in it . phd f: i do n't so this is what the meeting 's about , just how to , cuz we need to come up with a database like this just to do our work . and i actually do n't care , as long as it 's something useful to other people , what we choose . so maybe it 's maybe oth , if you have any idea of how to choose , cuz i do n't . grad c: , i chose this for a couple reasons . one of them is that it 's easy to parse . you do n't need a full xml parser . it 's very easy to just write a perl script to parse it . phd f: and you can have as much information in the tag as you want , right ? grad c: , i have it structured . so each type tag has only particular items that it can take . grad c: if you have more information . so what what nist would say is that instead of doing this , you would say something like " link { nonvocalsound } start equals , , some node id , end equals some other node id " , and then " type " would be " utterance " . , so it 's very similar . phd f: so why would it be a waste to do it this way if it 's similar enough that we can always translate it ? phd d: it probably would n't be a waste . it would mean that at some point if we wanted to switch , we 'd just have to translate everything . grad c: they 're developing a big infrastructure . and so it seems to me that if we want to use that , we might as go directly to what they 're doing , rather than phd a: if we want to do they already have something that 's that would be useful for us in place ? phd d: see , that 's the question . , how stable is their are they ready to go , grad c: the last time i looked at it was a while ago , probably a year ago , when we first started talking about this . and at that time at least it was still not very complete . and so , specifically they did n't have any external format representation at that time . they just had the conceptual node , annotated transcription graph , which i really liked . and that 's exactly what this is based on . since then , they ' ve developed their own external file format , which is , , this s this thing . , and they ' ve also developed a lot of tools , but i have n't looked at them . maybe i should . phd f: , would the tools run on something like this , if you can translate them anyway ? phd a: if it 's conceptually close , and they already have or will have tools that everybody else will be using , it would be crazy to do something s , separate that phd f: actually , so it 's that would really be the question , is just what you would feel is in the long run the best thing . 
phd f: cuz once we start , doing this i do n't we do n't actually have enough time to probably have to rehash it out again grad c: the other thing the other way that i established this was as easy translation to and from the transcriber format . phd f: , i like this . this is intuitively easy to actually r read , as easy it could as it could be . but , i suppose that as long as they have a type here that specifies " utt " , grad c: the with this , though , is that you ca n't really add any supplementary information . so if you suddenly decide that you want phd f: , if you look at it i in my mind i enough jane would know better , about the types of annotations and but i imagine that those are things that would , you guys mentioned this , that could span any it could be in its own channel , it could span time boundaries of any type , it could be instantaneous , things like that . and then from the recognition side we have backtraces at the phone - level . if if it can handle that , it could handle states or whatever . and then at the prosody - level we have frame like cepstral feature files , like these p - files or anything like that . and that 's the world of things that i and then we have the aligned channels , phd a: it seems to me you want to keep the frame - level separate . and then phd f: i definitely agree and i wanted to find actually a f a nicer format or a maybe a more compact format than what we used before . just cuz you ' ve got ten channels or whatever and two hours of a meeting . it 's it 's a lot of phd a: now now how would you represent , multiple speakers in this framework ? were you would just represent them as you would have like a speaker tag ? grad c: there 's a spea speaker tag up at the top which identifies them and then each utt the way i had it is each turn or each utterance , i do n't even remember now , had a speaker id tag attached to it . and in this format you would have a different tag , which would , be linked to the link . so so somewhere else you would have another thing that would be , let 's see , would it be a node or a link ? and so this one would have , an id is link seventy - four like that . and then somewhere up here you would have a link that , , was referencing l - seventy - four and had speaker adam . phd a: but but so how in the nist format do we express a hierarchical relationship between , say , an utterance and the words within it ? so how do you tell that these are the words that belong to that utterance ? grad c: , you would have another structure lower down than this that would be saying they 're all belonging to this id . phd f: and what if you actually have so right now what you have as utterance , the closest thing that comes out of the channelized is the between the segment boundaries that the transcribers put in or that thilo put in , which may or may not actually be , like , a s it 's usually not , the beginning and end of a sentence , say . phd f: , so it 's like a segment . , i assume this is possible , that if you have someone annotates the punctuation or whatever when they transcribe , you can say , from for from the c beginning of the sentence to the end of the sentence , from the annotations , this is a unit , even though it never actually i it 's only a unit by virtue of the annotations at the word - level . grad c: you 'd have another tag which says this is of type " sentence " . 
and , what phd f: that should be possible as long as the but , what i do n't understand is where the where in this type of file that would be expressed . grad c: you would have another tag somewhere . it 's , there 're two ways of doing it . phd f: s so it would just be floating before the sentence or floating after the sentence without a time - mark . grad c: you could have some link type equals " sentence " , and id is " s - whatever " . and then lower down you could have an utterance . so the type is " utterance " equals " utt " . and you could either say that no . i grad c: i take that back . can you can you say that this is part of this , phd a: that some something may be a part of one thing for one purpose and another thing of another purpose . so f grad c: there 's one level there 's one more level of indirection that i ' m forgetting . phd a: suppose you have a word sequence and you have two different segmentations of that same word sequence . f say , one segmentation is in terms of , , sentences . and another segmentation is in terms of , i , prosodic phrases . and let 's say that they do n't nest . so , a prosodic phrase may cross two sentences . i if that 's true or not but let 's as phd f: , it 's definitely true with the segment . that 's what i exactly what i meant by the utterances versus the sentence could be phd a: so , you want to be s you want to say this word is part of that sentence and this prosodic phrase . but the phrase is not part of the sentence and neither is the sentence part of the phrase . grad c: i ' m pretty that you can do that , but i ' m forgetting the exact level of nesting . phd a: so , you would have to have two different pointers from the word up one level up , one to the sent grad c: so so what you would end up having is a tag saying " here 's a word , and it starts here and it ends here " . and then lower down you would say " here 's a prosodic boundary and it has these words in it " . and lower down you 'd have " here 's a sentence , phd f: an - right . so you would be able to go in and say , " give me all the words in the bound in the prosodic phrase and give me all the words in the " phd a: the the o the other issue that you had was , how do you actually efficiently extract , find and extract information in a structure of this type ? phd f: , and , you guys might i if this is premature because i suppose once you get the representation you can do this , but the kinds of things i was worried about is , phd a: y you got ta do this you 're gon na want to do this very quickly or else you 'll spend all your time searching through very complex data structures phd f: you 'd need a p a paradigm for how to do it . but an example would be " find all the cases in which adam started to talk while andreas was talking and his pitch was rising , andreas 's pitch " . that thing . grad c: , that 's gon na be is the rising pitch a feature , or is it gon na be in the same file ? phd f: , the rising pitch will never be hand - annotated . so the all the prosodic features are going to be automatically grad c: right ? because you 're gon na have to write a program that goes through your feature file and looks for rising pitches . phd f: so normally what we would do is we would say " what do we wanna assign rising pitch to ? " are we gon na assign it to words ? are we gon na just assign it to when it 's rising we have a begin - end rise representation ? but suppose we dump out this file and we say , for every word we just classify it as , w , rise or fall or neither ? 
phd f: so we would be , taking the format and enriching it with things that we wanna query in relation to the words that are already in the file , and then querying it . phd a: you want a grep that 's that works at the structural on the structural representation . grad c: you have that . there 's a standard again in xml , specifically for searching xml documents structured x - xml documents , where you can specify both the content and the structural position . phd a: , but it 's not clear that 's that 's relative to the structure of the xml document , grad c: you use it as a tool . you use it as a tool , not an end - user . it 's not an end - user thing . it 's it 's you would use that to build your tool to do that search . phd a: be because here you 're specifying a lattice . so the underlying that 's the underlying data structure . and you want to be able to search in that lattice . grad c: no , no . the whole point is that the text and the lattice are isomorphic . they represent each other completely . so that th phd f: that 's true if the features from your acoustics or whatever that are not explicitly in this are at the level of these types . that that if you can do that grad c: , but that 's gon na be the trouble no matter what . no matter what format you choose , you 're gon na have the trou you 're gon na have the difficulty of relating the frame - level features phd f: that 's right . that 's why i was trying to figure out what 's the best format for this representation . and it 's still gon na be , not direct . , it or another example was , , where in the language where in the word sequence are people interrupting ? i that one 's actually easier . phd d: what about what about , the idea of using a relational database to , store the information from the xml ? so you would have xml would , you could use the xml to put the data in , and then when you get data out , you put it back in xml . so use xml as the transfer format , phd d: , but then you store the data in the database , which allows you to do all kinds of good search things in there . grad c: the , one of the things that atlas is doing is they 're trying to define an api which is independent of the back store , so that , you could define a single api and the storage could be flat xml files or a database . my opinion on that is for the s that we 're doing , i suspect it 's overkill to do a full relational database , that , just a flat file and , search tools i bet will be enough . grad c: but that 's the advantage of atlas , is that if we actually take decide to go that route completely and we program to their api , then if we wanted to add a database later it would be pretty easy . phd f: it seems like the thing you 'd do if i , if people start adding all kinds of s bells and whistles to the data . and so that might be , it 'd be good for us to know to use a format where we know we can easily , input that to some database if other people are using it . something like that . grad c: i ' m just a little hesitant to try to go whole hog on the whole framework that nist is talking about , with atlas and a database and all that , cuz it 's a big learning curve , just to get going . whereas if we just do a flat file format , it may not be as efficient but everyone can program in perl and use it . phd a: i ' m still , not convinced that you can do much on the text on the flat file that , the text representation . e because the text representation is gon na be , not reflecting the structure of your words and annotations . 
it 's just it 's grad c: , if it 's not representing it , then how do you recover it ? it 's representing it . phd a: y . you can use perl to read it in and construct a internal representation that is essentially a lattice . but , the and then grad c: for perl if you want to just do perl . if you wanted to use the structured xml query language , that 's a different thing . and it 's a set of tools that let you specify given the d - ddt dtd of the document , what sorts of structural searches you want to do . so you want to say that , you 're looking for , a tag within a particular tag that has this particular text in it , and , refers to a particular value . and so the point is n't that an end - user , who is looking for a query like you specified , would n't program it in this language . what you would do is , someone would build a tool that used that as a library . so that they so that you would n't have to construct the internal representations yourself . phd f: is a see , the kinds of questions , at least in the next to the end of this year , are there may be a lot of different ones , but they 'll all have a similar nature . they 'll be looking at either a word - level prosodic , an a value , phd f: like a continuous value , like the slope of something . but , we 'll do something where we some data reduction where the prosodic features are sort o , either at the word - level or at the segment - level , they 're not gon na be at the phone - level and they 're no not gon na be at the frame - level when we get done with giving them simpler shapes and things . and so the main thing is just being able , i , the two goals . , one that chuck mentioned is starting out with something that we do n't have to start over , that we do n't have to throw away if other people want to extend it for other kinds of questions , and being able to at least get enough , information out on where we condition the location of features on information that 's in the file that you put up there . and that would do it , grad c: and then there are long - term , big - infrastructure solutions . and so we want to try to pick something that lets us do a little bit of both . phd f: in the between , and especially that the representation does n't have to be thrown away , even if your tools change . grad c: and so it seems to me that , i have to look at it again to see whether it can really do what we want , but if we use the atlas external file representation , it seems like it 's rich enough that you could do quick tools just as i said in perl , and then later on if we choose to go up the learning curve , we can use the whole atlas inter infrastructure , phd f: if you would l look at that and let us you think . , we 're guinea pigs , cuz i want to get the prosody work done but i do n't want to waste time , getting the grad c: , i would n't for the formats , because anything you pick we 'll be able to translate to another form . phd a: ma , maybe you should actually look at it yourself too to get a sense of what it is you 'll be dealing with , because , , adam might have one opinion but you might have another , phd f: especially if there 's , e , if someone can help with at least the setup of the right phd f: , hi . the right representation , then , i , i hope it wo n't we do n't actually need the whole full - blown thing to be ready , phd f: so maybe if you guys can look at it and see what , we 're we 're actually just phd f: wrapping up , but , it 's a short meeting , i . 
is there anything else , like that helps me a lot , grad c: , the other thing we might want to look at is alternatives to p - file . , th the reason i like p - file is i ' m already familiar with it , we have expertise here , and so if we pick something else , there 's the learning - curve problem . but , it is just something we developed at icsi . grad c: , that 's gon na be a problem no matter what . you have the two - gigabyte limit on the filesystem size . and we definitely hit that with broadcast news . phd a: maybe you could extend the api to , support , like splitting up , conceptually one file into smaller files on disk so that you can essentially , have arbitrarily long f grad c: most of the tools can handle that . we did n't do it at the api - level . we did it at the t tool - level . that that most many of them can s you can specify several p - files and they 'll just be done sequentially . phd f: so , i , if you and don can if you can show him the p - file and see . so this would be like for the f - zero grad c: , if you do " man p - file " or " apropos p - file " , you 'll see a lot . grad b: i ' ve used the p - file , . i ' ve looked at it at least , briefly , when we were doing s something . grad c: i did n't de i did n't develop it . , it was dave johnson . so it 's all part of the quicknet library . it has all the utilities for it . phd a: no , p - files were around way before quicknet . p - files were around when w with , rap . grad c: but there are ni they 're the quicknet library has a bunch of things in it to handle p - files , so it works pretty . phd f: and that is n't really , i , as important as the main i what you call it , the main word - level phd f: , that 's really useful . , this is exactly the thing that i wanted to settle . so phd f: i it 's also a political deci , if you feel like that 's a community that would be good to tie into anyway , then it 's sounds like it 's worth doing . grad c: and , w , as i said , i what i did with this i based it on theirs . it 's just they had n't actually come up with an external format yet . so now that they have come up with a format , it does n't it seems pretty reasonable to use it . but let me look at it again . as i said , that grad c: there 's one level there 's one more level of indirection and i ' m just blanking on exactly how it works . i got ta look at it again . phd f: , we can start with , i , this input from dave 's , which you had printed out , the channelized input . cuz he has all of the channels , with the channels in the tag and like that . phd f: and so then it would just be a matter of getting making to handle the annotations that are , not at the word - level and , t to import the phd f: any annotation that , like , is n't already there . , anything you can envision . postdoc e: so what i was imagining was , so dave says we can have unlimited numbers of green ribbons . and so put , a green ribbon on for an overlap code . and since we w we it 's important to remain flexible regarding the time bins for now . and so it 's to have however , you want to have it , time , located in the discourse . so , if we tie the overlap code to the first word in the overlap , then you 'll have a time - marking . it wo n't it 'll be independent of the time bins , however these e evolve , shrink , or whatever , increase , or also , you could have different time bins for different purposes . and having it tied to the first word in an overlap segment is unique , , anchored , clear . 
and it would just end up on a separate ribbon . so the overlap coding is gon na be easy with respect to that . you look puzzled . postdoc e: but let me just no . w the idea is just to have a separate green ribbon , and let 's say that this is a time bin . there 's a word here . this is the first word of an overlapping segment of any length , overlapping with any other , word , i segment of any length . and , then you can indicate that this here was perhaps a ch a backchannel , or you can say that it was , a usurping of the turn , or you can , any number of categories . but the fact is , you have it time - tagged in a way that 's independent of the , sp particular time bin that the word ends up in . if it 's a large unit or a small unit , or we sh change the boundaries of the units , it 's still unique and , fits with the format , flexible , all that . phd a: it would be , gr this is r regarding , it 's related but not directly germane to the topic of discussion , but , when it comes to annotations , you often find yourself in the situation where you have different annotations of the same , say , word sequence . ok ? and sometimes the word sequences even differ slightly because they were edited s at one place but not the other . so , once this data gets out there , some people might start annotating this for , i , dialogue acts or , , topics or what the heck . there 's a zillion things that people might annotate this for . and the only thing that is really common among all the versi the various versions of this data is the word sequence , or approximately . phd a: or the times . but , see , if you 'd annotate dialogue acts , you do n't necessarily want to or topics you do n't really want to be dealing with time - marks . phd a: you 'd it 's much more efficient for them to just see the word sequence , right ? , most people are n't as sophisticated as we are here with , , time alignments and . so the phd a: the p my point is that you 're gon na end up with , word sequences that are differently annotated . and you want some tool , that is able to merge these different annotations back into a single , version . , and we had this problem very massively , at sri when we worked , a while back on , on dialogue acts as , , what was it ? phd a: utterance types . there 's , automatic , punctuation and like that . because we had one set of annotations that were based on , one version of the transcripts with a particular segmentation , and then we had another version that was based on , a different s slightly edited version of the transcripts with a different segmentation . so , we had these two different versions which were , you could tell they were from the same source but they were n't identical . so it was extremely hard to reliably merge these two back together to correlate the information from the different annotations . grad c: i do n't see any way that file formats are gon na help us with that . it 's it 's all a question of semantic . phd a: but once you have a file format , imagine writing not personally , but someone writing a tool that is essentially an alignment tool , that mediates between various versions , and , like th , you have this thing in unix where you have , diff . phd a: there 's the , diff that actually tries to reconcile different two diffs f based on the same original . phd a: something like that , but operating on these lattices that are really what 's behind this , this annotation format . 
grad c: there 's actually a diff library you can use to do things like that so you have different formats . phd a: so somewhere in the api you would like to have like a merge or some function that merges two versions . grad c: , it 's gon na be very hard . any structured anything when you try to merge is really , really hard because you ha i the hard part is n't the file format . the hard part is specifying what you mean by " merge " . phd f: but the one thing that would work here actually for i that is more reliable than the utterances is the speaker ons and offs . so if you have a good , grad c: the problem is saying " what are the semantics , what do you mean by " merge " ? " phd a: so so just to let what we where we kluged it by , doing , by doing hhh . both were based on words , so , bo we have two versions of the same words intersp , sprinkled with different tags for annotations . phd a: but , it had lots of errors and things would end up in the wrong order , and . so , if you had a more it was a kluge because it was reducing everything to , to textual alignment . phd f: d is n't that something where whoever if the people who are making changes , say in the transcripts , cuz this all happened when the transcripts were different ye , if they tie it to something , like if they tied it to the acoustic segment if they what ? then or if they tied it to an acoustic segment and we had the time - marks , that would help . but the problem is exactly as adam said , that you get , y you do n't have that information or it 's lost in the merge somehow , postdoc e: , can i ask one question ? it it seems to me that , we will have o an official version of the corpus , which will be only one version in terms of the words where the words are concerned . we 'd still have the merging issue maybe if coding were done independently of the phd a: because if the data gets out , people will do all kinds of things to it . and , s , several years from now you might want to look into , the prosody of referring expressions . and someone at the university of who knows where has annotated the referring expressions . so you want to get that annotation and bring it back in line with your data . phd f: but they ' ve also and so that 's exactly what we should somehow when you distribute the data , say that , that have some way of knowing how to merge it back in and asking people to try to do that . grad c: so so imagine his example is a good one . imagine that this person who developed the corpus of the referring expressions did n't include time . he included references to words . postdoc e: but then could n't you just indirectly figure out the time tied to the word ? postdoc e: not but you 'd have some anchoring point . he could n't have changed all the words . grad c: but they could have changed it a little . the , that they may have annotated it off a word transcript that is n't the same as our word transcript , so how do you merge it back in ? i understand what you 're saying . and i the answer is , it 's gon na be different every time . it 's j it 's just gon na be i it 's exactly what i said before , grad c: which is that " what do you mean by " merge " ? " so in this case where you have the words and you do n't have the times , what do you mean by " merge " ? if you tell me what you mean , write a program to do it . phd f: you can merge at the level of the representation that the other person preserved and that 's it . 
grad c: so so in this one you would have to do a best match between the word sequences , grad c: extract the times f from the best match of theirs to yours , and use that . phd f: and then infer that their time - marks are somewhere in between . , exactly . postdoc e: but it could be that they just , it could be that they chunked they lost certain utterances and all that , phd f: , i , w i did n't want to keep people too long and adam wanted t people i 'll read the digits . if anyone else offers to , that 'd be great . phd a: for th for the { nonvocalsound } for the benefit of science we 'll read the digits . phd f: a lot . it 's really helpful . , adam and don { nonvocalsound } will meet and that 's great . very useful . go next . ###summary: two main options were discussed as to the organisation of the collected data. on the one hand , a bespoke xml structure that connects transcriptions and annotations ( down to the word-level ) to a common timeline. its advantages are that it is easier to read , parse , map onto the transcriber format and to expand with extra features. phone-level analysis can be included in the same structure , or in a separate , linked file. the respective frame-level representation can be handled by p-files , a technology developed at icsi , which also comes with a library of tools. separation of levels of analysis makes files more compact and manageable. xml standards offer libraries that can be used for the development of search tools. on the other hand , the atlas ( nist ) technology offers a very similar , but more generic organisational scheme based on nodes and links. these are labeled with domain-specific types , like "utterance" or "speaker". this option offers well-developed infrastructure and flexibility as to the type of data storage ( flat xml files or a relational database ). in either case , it is important for the chosen format to allow for fast searches and flexible updates and , if possible , to be reusable in future work. in order to confirm the suitability of the data format provided by the atlas project , its current state of development will be investigated. more specifically , the issues that have to be ascertained are , firstly , whether the external file representation offers a format that would be appropriate for speech data , and , secondly , how the linking between the different annotations ( e.g. between word-level representations and prosodic-feature structures ) can be achieved. regardless of the actual format , however , there was consensus that keeping levels of analysis ( words , phones , frames , etc. ) in separate , inter-linked files can make their management easier. choosing a project-specific format for the representation of the data might not be optimal for future work. on the other hand , it is not yet clear whether a more standardised , but generic technology , like that of the atlas project , can accommodate all the requirements of speech analysis. regardless of the particular format , including all annotations ( sentences , words , phones , frames , etc. ) in one file could result in unmanageable file sizes. searching , updating or simply parsing a file for a simple task can become an unwieldy process. even p-files , which are only for frame-level annotation , may be too verbose for the amount of data resulting from hour-long recordings. the actual mapping of word-level transcriptions to frame-level representations is expected to be problematic anyway.
likewise , problems will arise if , in the future , slightly different transcripts of the same data are annotated in formats that do not include time-marks. trying to merge such annotations later will not be easy , because of the combination of transcription discrepancies with the loss of the underlying connection offered by the time-marks. an xml scheme to build representations of the data is already available. it incorporates information regarding utterances , sentences , speakers , words , etc. all these features are linked together via time-marks that slot into a single , common timeline. this format also allows for linking to different levels of representation of the same data. for the frame-level representation , p-files are a readily available technology , developed at icsi. besides providing an appropriate format , p-files come with a library of tools and the respective documentation.
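a toy illustration of the time-anchored xml scheme summarised above , with words pointing at ids on a shared timeline ; the element and attribute names are invented for the example and are not the project 's actual format .

```python
# toy illustration of word-level annotation anchored to a shared timeline;
# element and attribute names are invented for the example.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<meeting>
  <timeline>
    <t id="t1" time="12.30"/><t id="t2" time="12.87"/><t id="t3" time="13.02"/>
  </timeline>
  <utterance speaker="A" start="t1" end="t3">
    <word start="t1" end="t2">ok</word>
    <word start="t2" end="t3">so</word>
  </utterance>
</meeting>
""")

# resolve symbolic time ids to seconds; segment boundaries can then be
# redrawn without touching the word-level anchors
times = {t.get("id"): float(t.get("time")) for t in doc.find("timeline")}
for w in doc.iter("word"):
    print(w.text, times[w.get("start")], times[w.get("end")])
```

the point of the indirection is the one made in the summary : units can be re-segmented or re-labelled while every word keeps a unique , stable anchor on the common timeline .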
grad e: ok , so for for people wearing the wireless mikes , like this one , i find the easiest way to wear it is sorta this sorta like that . grad e: it 's actually a lot more comfortable then if you try to put it over your temples , grad e: and then also , for all of them , if your boom is adjustable , the boom should be towards the corner of your mouth , grad e: and about a thumb to a thumb and a half distance away from your mouth , phd d: that 's interesting . so why did n't you get the same results and the unadapted ? grad e: everyone should have at least two forms possibly three in front of you depending on who you are . we 're doing a new speaker form and you only have to spea fill out the speaker form once but everyone does need to do it . and so that 's the name , sex , email , et cetera . grad e: we we had a lot of discussion about the variety of english and so on so if you what to put just leave it blank . i designed the form and i what to put for my own region , phd a: may i make one suggestion ? instead of age put date of year of birth because age will change , but the year of birth changes , stays the same , usually . postdoc g: course on the other hand you could you view it as the age at the time of the postdoc g: yes , but what we care about is the age at the recording date rather than the grad e: either way i think age is alright and then there will be attached to this a point or two these forms so that you 'll be able to extract the date off that grad e: so , anyway . and so then you also have a digits form which needs to be filled out every time , the speaker form only once , the digit form every time even if you do n't read the digits you have to fill out the digits form so that we know that you were at the meeting . ok ? and then also if you have n't filled one out already you do have to fill out a consent form . and that should just be one person whose name i . grad e: and i ' m liz and andreas wanna talk about recognition results . anything else ? phd c: , i sent out an email s couple hours ago so with andreas ' help andreas put together a no frills recognizer which is gender - dependent but like no adaptation , no cross - word models , no trigrams a bigram recognizer and that 's trained on switchboard which is telephone conversations . and to don 's help wh who don took the first meeting that jane had transcribed and separated used the individual channels we segmented it in into the segments that jane had used and don sampled that so eight k and then we ran up to i the first twenty minutes , up to synch time of one two zero so is that 's twenty minutes or so ? because i there 's some , phd c: and don can talk to jane about this , there 's some bug in the actual synch time file that i ' m we 're not where it came from but after that was a little messier . anyway so it 's twenty minutes and i actually phd a: but i looking at the first sentences looked much better than that and then suddenly it turned very bad and then we noticed that the reference was always one off with the it was actually recognized phd c: no actually it was i it was a complicated bug because they were sometimes one off and then sometimes random phd a: so so we have everything recognized but we scored only the first whatever , up to that time to postdoc g: we should say something about the glitch . he he can say something about the glitch . 
grad e: so the that problem has gone away in the original driver believe it or not when the ssh key gen ran the driver paused for a fraction of a second and so the channels get a little asynchronous and so if you listen to it in the middle there 's a little part where it starts doing click sounds . phd c: what happens is it actually affects the script that don if we know about it then i it could always be checked for it postdoc g: it it had no effect on my transcription , i had no trouble hearing it and having time bins grad e: i do remember seeing once the transcriber produce an incorrect xml file where one of the synch numbers was incorrect . phd c: and so then , you look at that and it 's got more than three significant digits in a synch time then that ca n't be right grad f: non - zero ? there are like more cuz there 's a lot of zeros i tacked on just because of the way the script ran , phd c: that would really be a problem , so anyway these are just the ones that are the prebug for one meeting . and what 's which phd c: this is really encouraging cuz this is free recognition , there 's no the language model for switchboard is different so you can see some like this trent lott which phd c: there 's a lot of perfect ones and good ones and all the references , you can read them and when we get more results you can look through and see phd a: there are a fair number of errors that are , where got the plural s wrong or the inflection on the verb wrong . grad e: , and who cares ? and and there were lots the " - s , " in on " - s " of " - s . phd c: there 's no those are actually a lot of the errors are out of vocabulary , so is it like pzm is three words , it 's pzm , there 's nothing there 's no language model for pzm or grad e: right . ri - ri right . did you say there 's no language for pzm ? grad e: do you mean so every time someone says pzm it 's an error ? maybe we should n't say pzm in these meetings . phd c: so , the bottom line is even though it 's not a huge amount of data it should be reasonable to actually run recognition and be like within the scope of r reasonable s switchboard this is like h about how we do on switchboard - two data with the switchboard - one trained mostly trained recognizer and switchboard - two is got a different population of speakers and a different topic phd c: and they 're talking about things in the news that happened after switchboard - one so there was @ so that 's great . professor b: so we 're in better shape than we were say when we did had the ninety - three workshop and we were all getting like seventy percent error on switchboard . phd c: especially i with jane that would help for since we have this new data now in order to go from the transcripts more easily to just the words that the recognizer would use for scoring . i had to deal with some of it by hand but a lot of it can be automated s by professor b: one thing i did n't get so the language model was straight from bigram from switchboard the acoustic models were also from switchboard or so they did n't have anything from this acoustic data in yet ? phd c: so that 's the on that 's the only acoustic training data that we have a lot of phd c: and i ramana , so a guy at sri said that there 's not a huge amount of difference going from phd c: it 's not like we probably lose a huge amount but we wo n't know because we do n't have any full band models for s conversational speech . 
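a rough sketch of the sanity check mentioned above : flag sync times in a transcriber-style file that carry more than three non-zero decimal places , the symptom attributed to the driver glitch . the file name , usage , and the exact threshold are assumptions for illustration .

```python
# sketch of the sync-time sanity check: flag times with more than three
# non-zero decimal places in a transcriber-style file. assumed usage:
#   python check_sync.py meeting.trs
import re
import sys

sync = re.compile(r'<Sync\s+time="([0-9.]+)"')
for n, line in enumerate(open(sys.argv[1]), 1):
    for t in sync.findall(line):
        frac = t.split(".")[1] if "." in t else ""
        # trailing zeros are harmless padding; non-zero digits past the
        # third decimal place are the suspicious case
        if len(frac.rstrip("0")) > 3:
            print("line %d: suspicious sync time %s" % (n, t))
```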
phd d: it 's probably not as bad as going f using full band models on telephone band speech professor b: but for broadcast news when we played around between the two there was n't a huge loss . phd a: it 's also there 's actually more data is from broadcast news but with a little less weight because phd a: our complete system starts by doing ge a gender detection so just for the heck of it i ran that phd a: and it might be reassuring for everybody to know that it got all the genders right . phd c: and that 's interesting cuz the their language models are quite different and i ' m pretty from listening to eric that , given the words he was saying and given his pronunciation that the reason that he 's so much worse is the lapel . phd c: so it 's now if we can just eliminate the lapel one when we get new microphones professor b: cuz he certainly in that when as a burp user he was a pretty strong one . phd c: he sounded to me just from he sounded like a , what 's it a sheep or a goat ? phd c: right . sounded good . right so i the good news is that this is without a lot of the bells and whistles that we c can do with the sri system and we 'll have more data and we can also start to maybe adapt the language models once we have enough meetings . so this is only twenty minutes of one meeting with no tailoring . phd a: clearly there are with just a small amount of actual meeting transcriptions thrown into the language model you can probably do quite a bit better because the phd c: pzm and then there 's things like for the transcription i got when someone has a digit in the transcript i if they said , one or eleven and i if they said tcl or tcl . there 's things like that where , the we 'll probably have to ask the transcribers to indicate some of those kinds of things but in general it was really good and i ' m hoping and this is good news because that means the force alignments should be good and if the force alignments , it 's good news anyway but if the force alignments are good we can get all kinds of information . about , prosodic information and speaker overlaps and directly from the aligned times . so that 'll be something that actually in order to assess the forced alignment we need s some linguists or some people to look at it and say are these boundaries in about the right place . because it 's just gon na give us time marks phd c: so this would be like if you take the words and force align them on all the individual close talk close talking mikes then how good are these in reality and then i was thinking it grad e: so we might want to take twenty minutes and do a closer word level transcription . maybe actually mark the word boundaries . phd c: or i have someone look at the alignments maybe a linguist who can say roughly if these are ok and how far away they are . but it 's got ta be pretty good because otherwise the word recognition would be really b crummy . phd c: it would n't necessarily be the other way around , if the wor word recognition was crummy the alignment might be ok but if the word recognition is this good the alignment should be pretty good . so that 's about it . phd d: i wonder if this is a good thing or a bad thing though , if we 're pr grad e: do n't worry about it w d that 's the close talking mikes . 
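a rough sketch of how hand-checked boundaries could be scored against forced-alignment output , per the suggestion that someone verify whether the boundaries are " in about the right place " ; the time values are made up for illustration .

```python
# per-boundary offsets between forced-alignment output and hand-marked
# reference times; the values below are made up for illustration
aligned = [0.00, 0.41, 0.78, 1.30, 1.95]   # from the forced alignment
hand    = [0.00, 0.44, 0.75, 1.33, 1.90]   # marked by a human checker

offsets = [abs(a - h) for a, h in zip(aligned, hand)]
print("mean offset %.3f s, worst %.3f s" % (sum(offsets) / len(offsets), max(offsets)))
```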
try it on the p z ms and professor b: there 's still just the w the percentages and , they 're not a as we ' ve talked about before there 's probably overlaps professor b: there 's probably overlaps in fair number in switchboard as but there 's other phenomena , it 's a meeting , it 's a different thing and there 's lots of to learn with the close talking mikes certainly i 'd like to see as soon as we could , maybe get some of the glitches out of the way but soon as we could how it does with say with the p z ms or maybe even one of the and see if it 's , is it a hundred twenty percent or maybe it 's not maybe if with some adaptation you get this down to fifty percent or forty - five percent and then if for the pzm it 's seventy like that 's actually something we could work with a little bit phd c: no it 's really , this way we least have a baseline we know that the transcripts are very good so once you can get to the words that the recognizer which is a total subset of the things you need to understand the text they 're pretty good so and it 's converting automatically from the xml to the chopping up the wave forms and it 's not the case that the end of one utterance is in the next segment and things like that which we had more problems with in switchboard and let 's see there was one more thing i wanted to mention i ca n't remember . anyway it 's phd c: so andreas set up this recognizer and the recognizer all the files i ' m moving to sri and running everything there so i brought back just these result files and people can look at them phd a: we we talked about setting up the sri recognizer here . that 's if there are more machines here plus people can could run their own variants of the recognition runs certainly doable . postdoc g: i need t . i need to ask one question . which is so this issue of the legalistic aspects of the pre - sent pre - adapted , so what is the data that you take into sri , first question , you 're maintaining it in a place that would n't be publicly readable that , right ? grad e: it 's just the audio data itself , until people have a chance to edit it . phd c: so i can protect my directories through there . right now they 're not they 're in the speech group directories which so i will professor b: so we just have to go through this process of having people approve the transcriptions , say it 's ok . postdoc g: , we had to get them to approve and then i cuz the other question i was gon na ask is if we 're having it 's but this meeting that you have , no problem cuz i speak for myself postdoc g: but that we did n't do anything that but anyway so i would n't be too concerned about it with respect to that although we should clear it with eric and dan but these results are based on data which have n't had the chance to be reviewed by the subjects postdoc g: and i how that stands , if you get fantastic results and it 's involving data which later end up being lessened by , certain elisions , then i but i wanted to raise that issue , professor b: once we get all this streamlined it may be sh it hopefully it will be fairly quick but we get the transcriptions , people approve them and so on it 's just that we 're grad e: alright we need to work at a system for doing that approval so that we can send people the transcripts and get back any bleeps that they want professor b: it 's gon na be a rare thing that there 's a bleep for the most part . 
phd a: u actually i had a question about the downsampling , i who , how this was done but is there are there any issues with downsampling phd a: because i know that the recognizer that we use h can do it on the fly so we would n't have to have it do it explicitly beforehand . and is there any i are there other d sev is there more than one way to do the downsampling where one might be better than another ? grad f: there are lots of w there are lots of ways to do the downsampling different filters to put on , like anti - aliasing . grad e: i do n't think we even know which one i assume you 're using syncat to do it ? phd a: so so the other thing we should try is to just take the original wave forms , phd a: and and feed them to the sri recognizer and see if the sri front - end does something . phd c: they 're just bigger to transfer , that 's why i s downsampled them before but grad f: although those eighty meg files take a while to copy into my directories so , but no , it 's not i it would n't be a problem if you 're interested in it phd a: and and if for some reason we see that it works better then we might investigate why phd c: so we could try that with this particular twenty minutes of speech and see if there 's any differences . phd a: a at some point someone might have optimized whatever filtering is done for the actual recognition performance . so in other words right , grad e: it just seems to me that , small changes to the language model and the vocabulary will so swamp that it may be premature to worry about that . so one is a half a percent better than the other i do n't think that gives you any information . phd c: it 's just as easy to give you the sixteen k individual , it was just more disk space for storing them professor b: plp uses auto - regressive filtering and modeling and so it can be sensitive to the filtering that you 're doing but mel cepstrum might not b you would n't expect to be so much phd c: we can try it if you generate like the same set of files just up to that point where we stopped anyway and just sti stick them somewhere phd a: do n't stop . do n't stop at that part because we 're actually using the entire conversation to estimate the speaker parameters , grad f: , i 'll i have to do is e the reference file would stay the same , it 's just the individual segments would be approximately twice as long and i could just replace them with the bigger ones in the directory , phd c: i hand - edited the whole meeting so that can be run it 's just once we get the bug out . postdoc g: one one question which is i had the impression from this meeting that w that i transcribed that there was already automatic downsampling occurring , is that in order to so it was so it 's like there 's already down professor b: this is being recorded at forty - eight kilohertz . which is more that anybody needs phd c: and that 's actually said in your meeting , that 's how i know that . professor b: it 's a digital audio orientation for the board it 's in the monitor so it 's professor b: so it 's just that they were operating from switchboard which was a completely telephone database phd c: and i if you 're comparing like if you wanna run recognition on the pzm you would want you do n't want to downsample the wh that professor b: no actually i would think that you would get better you 'd get better high frequencies in the local mike . grad e: we 're gon na have plenty of low frequency on the p z ms with the fans . 
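one plausible way to do the 16 khz to 8 khz downsampling with an explicit anti-aliasing filter , using scipy 's polyphase resampler ; this is a sketch of the general technique being debated , not necessarily what syncat or the sri front-end actually does .

```python
# 16 khz -> 8 khz downsampling with an explicit anti-aliasing low-pass;
# scipy's polyphase resampler filters before decimating. a sketch only.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 16000, 8000
x = np.random.randn(fs_in)            # stand-in for one second of speech
y = resample_poly(x, up=1, down=2)    # low-pass filter + decimate by 2
print(len(x), "->", len(y))           # 16000 -> 8000
```

different filter choices here are exactly the kind of variation the discussion flags as probably being swamped by language-model and vocabulary effects .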
phd c: there was just one more thing i wanted to say which is unrelated to the recognition except that it 's related good news also i got chuck fillmore to record meetings but he had too many people in his meetings and that 's too bad cuz they 're very animated and but jerry also so we 're starting on phd c: but he has fewer he wo n't have more than eight and it 's a meeting on even deeper understanding , edu , so that sounds interesting . as a compliment to our front - end meeting phd c: and so that 's gon na start monday and one of the things that i was realizing is it would be really great if anyone has any ideas on some time synchronous way that people in the meeting can make a comment to the person whose gon na transcribe it or put a push a button when they wanna make a note about " boy you should probably erase those last few " or " i want this not to be recorded now " like that s phd c: cuz i was thinking if the person who sets up the meeting is n't there and it 's a group that we and this came up talking to jerry also that is there any way for them to indicate to make that the qu request that they have that they make explicitly get addressed somehow so i if anyone has ideas or you could even write down " it 's about three twenty five and " professor b: what i was just suggesting is we have these this cross pad just for this purpose professor b: i if this or if it 's a question for the mail to dan but is this thing of two eight channel boards a maximum for this setup or could we go to a third board ? grad e: i . i 'll send mail to dan and ask . that it 's the maximum we can do without a lot of effort because it 's one board with two digital channels . grad e: e eight each . so it takes two fibers in to the one board . and so w if we wanna do that more than that we 'd have to have two boards , and then you have the synchronization issue . professor b: but that 's a question because that would if it was possible cuz it is i already we have a group of people in this room that can not all be miked and it 's not just cuz we have n't been to the store , right it 's phd d: it just it 's eight channels come in , does it have do with the sampling rate ? grad e: it 's eight . i have no idea . but each fiber channel has eight channels and there are two ch two fibers that go in to the card . professor b: it might be a hard limitation , one thing is it the whole thing as i said is all structured in terms of forty - eight kilohertz sampling so that pushes requirements up a bit grad e: then we 'd also have to get another add and another mixer and all that . grad e: so i 'll send a mail to dan and ask him . ok on the are we done with that ? so the oth topic is getting more mikes and different mikes , so i got a quote we can fit we have room for one more wireless and the wireless , this unit here is three fifty three hundred fifty dollars , it i did n't realize but we also have to get a tuner the receiver the other end , that 's four thirty and then also grad e: so that 's something like seven hundred eighty bucks for one more of these . and then also it turns out that the connector that this thing uses is proprietary of sony believe it or not and sony only sells this headset . so if we wanna use a different set headset the solution that the guy suggested and they lots of people have done is sony will sell you the jack with just wires coming out the end and then you can buy a headset that has pigtail and solder it yourself . 
and that 's the other solution and so the jacks are forty bucks apiece and the he recommended a crown cm three eleven ae headset for two hundred bucks apiece . professor b: there is n't this some thing that plugs in , you actually have to go and do the soldering yourself ? grad e: becau - the reason is the only thing you can get that will plug into this is this mike or just the connector . professor b: no i understand . the reason i ask is these handmade wiring jobs fall apart in use so the other thing is to see if we can get them to do a custom job and put it together for this . grad e: so so my question is should we go ahead and get na nine identical head - mounted crown mikes ? professor b: because there 's no point in doing that if it 's not gon na be any better . grad e: so why do n't we get one of these with the crown with a different headset ? and and see if that works . professor b: and see if it 's preferable and if it is then we 'll get more . grad e: , and he said they do n't have any of these in stock but they have them in la and so it will take about a week to get here . so ok to just go order ? grad e: and who is the contact if i wanna do an invoice cuz that 's how we did it before . professor b: y right cuz so one is for the daisy chain so that 's fifteen instead of sixteen phd c: can i ask a really dumb question ? is is there any way we can have like a wireless microphone that you pass around to the people who the extra people for the times they wanna talk that professor b: that 's a good idea . that 's not a dumb question , it 's a good idea , phd c: but there might be a way to say that there are gon na be these different people professor b: no that no that 's a very if we ca n't get another board and even if we can i have a feeling they 'll be some work . professor b: let 's figure that we have eight which are set up and then there 's a ninth which is passed around to professor b: but hand many hand - helds are built wi with anti - shock things so that it is less susceptible to hand noises . if you hold the lapel mike i you just get all k sorts of junk . grad e: i wonder if they have one that will hook up to this or whether again we 'll have to wire it ourselves . phd d: , you would n't want it to hook there you 'd just want it to hook into the receiver in the other room , grad e: it 's gon na be much easier to get one of these and just plug in a mike , is n't it ? professor b: no no so right , so this is a good point , so you have these mikes with a little antenna on the end professor b: and then just have that as the and then you can have groups of twenty people or whatever phd c: because there 's only as andreas pointed out actually in the large the larger the group the less interaction the less people are talking over each other grad e: so i people who have to leave can leave and do we have anything else to discuss or should we just do digits ? postdoc g: of some extra a couple of extra things i 'd like to mention . one of them is to give you a status in terms of the transcriptions so far . so as of last night i 'd assigned twelve hours and they 'd finished nine and my goal was to have eleven done by the end of the month , that by tomorrow we 'll have ten . 
phd c: i j and this i got this email from jane at like two in the morning postdoc g: and then also an idea for another meeting , which would be to have the transcribers talk about the data it 's a little bit phd c: that 's a great idea cuz i 'd like to g have it recorded so that we can remember all the little things , phd d: so if we got them to talk about this meeting , it would be a meta meeting . postdoc g: that 's what i ' m thinking , have them talk about the data and they ' ve made observations to me postdoc g: like they say this meeting that we think has so much overlap , it does but there are other groups of similar size that have very little , it 's part of it 's the norm of the group and all that and they have various observations that would be fun , . professor b: so maybe we could they could have a meeting more or less without us that to do this and we should record it and then maybe one or two of them could come to one of these meetings and could tell us about it . phd c: it 's they will get to transcribe their own meeting but they also get paid for having a break postdoc g: and then i wanted to also say something about the fiscus john fiscus visit tomorrow . and which is to say that w it 'll be from nine to one that i ' m going to offer the organization allow him to adjust it if he wishes but to be in three parts , the acoustic part coming first which would be the room engineering aspects other things and he 'll be also presenting what nist is doing and then number two would be a the transcription process so this would be a focus on like presegmentation and the modifications to the multitrans interface which allows more refined encoding of the beginnings and ends of the overlapping segments which dave gelbart 's been doing and then and the presegmentation thilo 's been doing and then the third part would he has some that 's i relevant with respect to nist and then the third one would be focus on transcription standards so at nist he 's interested in this establishment of a global encoding standard i would say and i want it , k see what they 're doing and also present what we ' ve chosen as ours and discuss that thing . and so but he 's only here until one and actually we 're thinking of noon being lunch time so hoping that we can get as much of this done as possible before noon . and everybody who wants to attend is welcome . so postdoc g: here mostly but i ' ve also reserved the barco room to figure out how that works in terms of like maybe having a live demonstration . professor b: ok but the nine o ' cl nine o ' clock will be i be in here . , ok . grad e: should y we make him wear andreas ' mike or would that just be too confusing ? professor b: no i do n't think it 's confusing . , it does n't confuse me . grad j: ok , my name is espen eriksen . i ' m a norwegian . this is my second semester at berkeley . currently i ' m taking my first graduate level courses in dsp and when i come back to norway i ' m gon na continue with the more of a research project work . so this semester i ' m starting up with a small project through dave gelbart which i ' m taking a course with i got in touch with him and he told me about this project . so with the help of dan ellis i ' m gon na do small project associated to this . what i ' m gon na try to do is use ech echo cancellation to handle the periods where you have overlapping talk . to try to do something about that . 
so currently i ' m just reading up on echo cancellation , s looking into the theory behind that and then hopefully i get some results . so it 's a project goes over the course of one semester . grad j: so i ' m just here today to introduce myself . tell about i 'll be working on this . grad f: we were just talking about something like this yesterday or yesterday with liz . about doing some of the echo cancellation or possibly the spectroanalysis over the overlaps ,
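a minimal normalised lms ( nlms ) adaptive filter , the textbook core of the echo cancellation being described : estimate how a reference channel leaks into a target channel and subtract the estimate . purely illustrative , not the project 's implementation .

```python
# minimal nlms adaptive filter: model how a reference channel leaks into
# a target channel and subtract the estimate. illustrative only.
import numpy as np

def nlms_cancel(target, reference, taps=64, mu=0.5, eps=1e-6):
    w = np.zeros(taps)                    # adaptive filter weights
    out = np.zeros_like(target)
    for n in range(taps, len(target)):
        x = reference[n - taps:n][::-1]   # newest reference sample first
        e = target[n] - w @ x             # residual after subtracting estimate
        w += mu * e * x / (x @ x + eps)   # normalised lms update
        out[n] = e
    return out
```

run on a pair of close-talking channels , the hope would be that the residual keeps the wearer 's speech while the overlapping talker 's leakage is reduced .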
the berkeley meeting recorder group discussed recognition results generated for 20 minutes of close-talking microphone data. recognition performance was very good , indicating promising results for forced alignment procedures and the ability to analyze other important signal information , e.g. prosody and overlapping speech. it was decided that close-talking data should be downsampled and fed to the sri recognizer to compare recognition performance , and that data from the far-field microphones should be tested on the recognizer as soon as possible. the group also discussed recording setup and equipment issues. a decision was made to purchase an additional head-mounted crown microphone. a tentative decision was also made to integrate the use of a hand-held wireless microphone to help compensate for the lack of available close-talking microphones. the collection of meeting recorder data is ongoing , and will include meetings by the berkeley even deeper understanding research group and , possibly , an organized discussion by members of the transcriber pool. following close-talking microphone recognition procedures , it was decided that data from the far-field microphones ( or pzms ) should be tested on the recognizer as soon as possible. speaker mn017 will compare close-talking microphone recognition results with those obtained for downsampled data. the sri recognizer will be set up at icsi to enable researchers to run their own variants. the group decided to purchase one additional head-mounted crown microphone. a tentative decision was also made to acquire a hand-held wireless microphone to pass around to additional meeting participants should the installation of more close-talking microphones prove too difficult. the suggestion of incorporating the use of cross pads during meeting recordings received favorable comments from participants. it was also tentatively decided to elicit meeting data from members of the transcriber pool discussing the meeting recorder corpus. during recognition procedures , a fixable acoustic glitch was discovered that causes speech channels to become slightly desynchronized. recognition performance was poorer for speech recorded with lapel microphones. recognition errors were largely due to misrecognition of the plural -s and out-of-vocabulary items. the current recording setup is limited in that not all bmr meeting participants have their own close-talking microphone. a 'no-frills' recognizer , trained on switchboard acoustic models and both switchboard and broadcast news language models , was used to test data for one transcribed meeting. despite a glitch affecting the synchronized output of remaining data , recognition results for the first 20 minutes were very positive. the recognizer was successful in making accurate gender distinctions among speakers. it is anticipated that subsequent forced alignment procedures will also generate good results , enabling the group to analyze other types of signal information , such as prosody and overlapping speech segments. future work will involve having linguists assess the accuracy of the time marks in force-aligned data. the collection of meeting recorder data is ongoing , and will include meetings by the berkeley even deeper understanding research group. transcriptions are also ongoing , with nearly 10 hours of meeting recorder data transcribed so far.
future work will involve working out a system for getting subjects to approve transcriptions so that confidentiality agreements are upheld and data may be shared with other research groups. a visiting student from norway is working with a member of icsi to conduct a related project on echo cancellation for handling segments of overlapping speech. finally , the group is expecting a visit from a representative of nist.
###dialogue: grad e: ok , so for for people wearing the wireless mikes , like this one , i find the easiest way to wear it is sorta this sorta like that . grad e: it 's actually a lot more comfortable then if you try to put it over your temples , grad e: and then also , for all of them , if your boom is adjustable , the boom should be towards the corner of your mouth , grad e: and about a thumb to a thumb and a half distance away from your mouth , phd d: that 's interesting . so why did n't you get the same results and the unadapted ? grad e: everyone should have at least two forms possibly three in front of you depending on who you are . we 're doing a new speaker form and you only have to spea fill out the speaker form once but everyone does need to do it . and so that 's the name , sex , email , et cetera . grad e: we we had a lot of discussion about the variety of english and so on so if you what to put just leave it blank . i designed the form and i what to put for my own region , phd a: may i make one suggestion ? instead of age put date of year of birth because age will change , but the year of birth changes , stays the same , usually . postdoc g: course on the other hand you could you view it as the age at the time of the postdoc g: yes , but what we care about is the age at the recording date rather than the grad e: either way i think age is alright and then there will be attached to this a point or two these forms so that you 'll be able to extract the date off that grad e: so , anyway . and so then you also have a digits form which needs to be filled out every time , the speaker form only once , the digit form every time even if you do n't read the digits you have to fill out the digits form so that we know that you were at the meeting . ok ? and then also if you have n't filled one out already you do have to fill out a consent form . and that should just be one person whose name i . grad e: and i ' m liz and andreas wanna talk about recognition results . anything else ? phd c: , i sent out an email s couple hours ago so with andreas ' help andreas put together a no frills recognizer which is gender - dependent but like no adaptation , no cross - word models , no trigrams a bigram recognizer and that 's trained on switchboard which is telephone conversations . and to don 's help wh who don took the first meeting that jane had transcribed and separated used the individual channels we segmented it in into the segments that jane had used and don sampled that so eight k and then we ran up to i the first twenty minutes , up to synch time of one two zero so is that 's twenty minutes or so ? because i there 's some , phd c: and don can talk to jane about this , there 's some bug in the actual synch time file that i ' m we 're not where it came from but after that was a little messier . anyway so it 's twenty minutes and i actually phd a: but i looking at the first sentences looked much better than that and then suddenly it turned very bad and then we noticed that the reference was always one off with the it was actually recognized phd c: no actually it was i it was a complicated bug because they were sometimes one off and then sometimes random phd a: so so we have everything recognized but we scored only the first whatever , up to that time to postdoc g: we should say something about the glitch . he he can say something about the glitch . 
grad e: so the that problem has gone away in the original driver believe it or not when the ssh key gen ran the driver paused for a fraction of a second and so the channels get a little asynchronous and so if you listen to it in the middle there 's a little part where it starts doing click sounds . phd c: what happens is it actually affects the script that don if we know about it then i it could always be checked for it postdoc g: it it had no effect on my transcription , i had no trouble hearing it and having time bins grad e: i do remember seeing once the transcriber produce an incorrect xml file where one of the synch numbers was incorrect . phd c: and so then , you look at that and it 's got more than three significant digits in a synch time then that ca n't be right grad f: non - zero ? there are like more cuz there 's a lot of zeros i tacked on just because of the way the script ran , phd c: that would really be a problem , so anyway these are just the ones that are the prebug for one meeting . and what 's which phd c: this is really encouraging cuz this is free recognition , there 's no the language model for switchboard is different so you can see some like this trent lott which phd c: there 's a lot of perfect ones and good ones and all the references , you can read them and when we get more results you can look through and see phd a: there are a fair number of errors that are , where got the plural s wrong or the inflection on the verb wrong . grad e: , and who cares ? and and there were lots the " - s , " in on " - s " of " - s . phd c: there 's no those are actually a lot of the errors are out of vocabulary , so is it like pzm is three words , it 's pzm , there 's nothing there 's no language model for pzm or grad e: right . ri - ri right . did you say there 's no language for pzm ? grad e: do you mean so every time someone says pzm it 's an error ? maybe we should n't say pzm in these meetings . phd c: so , the bottom line is even though it 's not a huge amount of data it should be reasonable to actually run recognition and be like within the scope of r reasonable s switchboard this is like h about how we do on switchboard - two data with the switchboard - one trained mostly trained recognizer and switchboard - two is got a different population of speakers and a different topic phd c: and they 're talking about things in the news that happened after switchboard - one so there was @ so that 's great . professor b: so we 're in better shape than we were say when we did had the ninety - three workshop and we were all getting like seventy percent error on switchboard . phd c: especially i with jane that would help for since we have this new data now in order to go from the transcripts more easily to just the words that the recognizer would use for scoring . i had to deal with some of it by hand but a lot of it can be automated s by professor b: one thing i did n't get so the language model was straight from bigram from switchboard the acoustic models were also from switchboard or so they did n't have anything from this acoustic data in yet ? phd c: so that 's the on that 's the only acoustic training data that we have a lot of phd c: and i ramana , so a guy at sri said that there 's not a huge amount of difference going from phd c: it 's not like we probably lose a huge amount but we wo n't know because we do n't have any full band models for s conversational speech . 
phd d: it 's probably not as bad as going f using full band models on telephone band speech professor b: but for broadcast news when we played around between the two there was n't a huge loss . phd a: it 's also there 's actually more data is from broadcast news but with a little less weight because phd a: our complete system starts by doing ge a gender detection so just for the heck of it i ran that phd a: and it might be reassuring for everybody to know that it got all the genders right . phd c: and that 's interesting cuz the their language models are quite different and i ' m pretty from listening to eric that , given the words he was saying and given his pronunciation that the reason that he 's so much worse is the lapel . phd c: so it 's now if we can just eliminate the lapel one when we get new microphones professor b: cuz he certainly in that when as a burp user he was a pretty strong one . phd c: he sounded to me just from he sounded like a , what 's it a sheep or a goat ? phd c: right . sounded good . right so i the good news is that this is without a lot of the bells and whistles that we c can do with the sri system and we 'll have more data and we can also start to maybe adapt the language models once we have enough meetings . so this is only twenty minutes of one meeting with no tailoring . phd a: clearly there are with just a small amount of actual meeting transcriptions thrown into the language model you can probably do quite a bit better because the phd c: pzm and then there 's things like for the transcription i got when someone has a digit in the transcript i if they said , one or eleven and i if they said tcl or tcl . there 's things like that where , the we 'll probably have to ask the transcribers to indicate some of those kinds of things but in general it was really good and i ' m hoping and this is good news because that means the force alignments should be good and if the force alignments , it 's good news anyway but if the force alignments are good we can get all kinds of information . about , prosodic information and speaker overlaps and directly from the aligned times . so that 'll be something that actually in order to assess the forced alignment we need s some linguists or some people to look at it and say are these boundaries in about the right place . because it 's just gon na give us time marks phd c: so this would be like if you take the words and force align them on all the individual close talk close talking mikes then how good are these in reality and then i was thinking it grad e: so we might want to take twenty minutes and do a closer word level transcription . maybe actually mark the word boundaries . phd c: or i have someone look at the alignments maybe a linguist who can say roughly if these are ok and how far away they are . but it 's got ta be pretty good because otherwise the word recognition would be really b crummy . phd c: it would n't necessarily be the other way around , if the wor word recognition was crummy the alignment might be ok but if the word recognition is this good the alignment should be pretty good . so that 's about it . phd d: i wonder if this is a good thing or a bad thing though , if we 're pr grad e: do n't worry about it w d that 's the close talking mikes . 
try it on the p z ms and professor b: there 's still just the w the percentages and , they 're not a as we ' ve talked about before there 's probably overlaps professor b: there 's probably overlaps in fair number in switchboard as but there 's other phenomena , it 's a meeting , it 's a different thing and there 's lots of to learn with the close talking mikes certainly i 'd like to see as soon as we could , maybe get some of the glitches out of the way but soon as we could how it does with say with the p z ms or maybe even one of the and see if it 's , is it a hundred twenty percent or maybe it 's not maybe if with some adaptation you get this down to fifty percent or forty - five percent and then if for the pzm it 's seventy like that 's actually something we could work with a little bit phd c: no it 's really , this way we least have a baseline we know that the transcripts are very good so once you can get to the words that the recognizer which is a total subset of the things you need to understand the text they 're pretty good so and it 's converting automatically from the xml to the chopping up the wave forms and it 's not the case that the end of one utterance is in the next segment and things like that which we had more problems with in switchboard and let 's see there was one more thing i wanted to mention i ca n't remember . anyway it 's phd c: so andreas set up this recognizer and the recognizer all the files i ' m moving to sri and running everything there so i brought back just these result files and people can look at them phd a: we we talked about setting up the sri recognizer here . that 's if there are more machines here plus people can could run their own variants of the recognition runs certainly doable . postdoc g: i need t . i need to ask one question . which is so this issue of the legalistic aspects of the pre - sent pre - adapted , so what is the data that you take into sri , first question , you 're maintaining it in a place that would n't be publicly readable that , right ? grad e: it 's just the audio data itself , until people have a chance to edit it . phd c: so i can protect my directories through there . right now they 're not they 're in the speech group directories which so i will professor b: so we just have to go through this process of having people approve the transcriptions , say it 's ok . postdoc g: , we had to get them to approve and then i cuz the other question i was gon na ask is if we 're having it 's but this meeting that you have , no problem cuz i speak for myself postdoc g: but that we did n't do anything that but anyway so i would n't be too concerned about it with respect to that although we should clear it with eric and dan but these results are based on data which have n't had the chance to be reviewed by the subjects postdoc g: and i how that stands , if you get fantastic results and it 's involving data which later end up being lessened by , certain elisions , then i but i wanted to raise that issue , professor b: once we get all this streamlined it may be sh it hopefully it will be fairly quick but we get the transcriptions , people approve them and so on it 's just that we 're grad e: alright we need to work at a system for doing that approval so that we can send people the transcripts and get back any bleeps that they want professor b: it 's gon na be a rare thing that there 's a bleep for the most part . 
phd a: u actually i had a question about the downsampling , i who , how this was done but is there are there any issues with downsampling phd a: because i know that the recognizer that we use h can do it on the fly so we would n't have to have it do it explicitly beforehand . and is there any i are there other d sev is there more than one way to do the downsampling where one might be better than another ? grad f: there are lots of w there are lots of ways to do the downsampling different filters to put on , like anti - aliasing . grad e: i do n't think we even know which one i assume you 're using syncat to do it ? phd a: so so the other thing we should try is to just take the original wave forms , phd a: and and feed them to the sri recognizer and see if the sri front - end does something . phd c: they 're just bigger to transfer , that 's why i s downsampled them before but grad f: although those eighty meg files take a while to copy into my directories so , but no , it 's not i it would n't be a problem if you 're interested in it phd a: and and if for some reason we see that it works better then we might investigate why phd c: so we could try that with this particular twenty minutes of speech and see if there 's any differences . phd a: a at some point someone might have optimized whatever filtering is done for the actual recognition performance . so in other words right , grad e: it just seems to me that , small changes to the language model and the vocabulary will so swamp that it may be premature to worry about that . so one is a half a percent better than the other i do n't think that gives you any information . phd c: it 's just as easy to give you the sixteen k individual , it was just more disk space for storing them professor b: plp uses auto - regressive filtering and modeling and so it can be sensitive to the filtering that you 're doing but mel cepstrum might not b you would n't expect to be so much phd c: we can try it if you generate like the same set of files just up to that point where we stopped anyway and just sti stick them somewhere phd a: do n't stop . do n't stop at that part because we 're actually using the entire conversation to estimate the speaker parameters , grad f: , i 'll i have to do is e the reference file would stay the same , it 's just the individual segments would be approximately twice as long and i could just replace them with the bigger ones in the directory , phd c: i hand - edited the whole meeting so that can be run it 's just once we get the bug out . postdoc g: one one question which is i had the impression from this meeting that w that i transcribed that there was already automatic downsampling occurring , is that in order to so it was so it 's like there 's already down professor b: this is being recorded at forty - eight kilohertz . which is more that anybody needs phd c: and that 's actually said in your meeting , that 's how i know that . professor b: it 's a digital audio orientation for the board it 's in the monitor so it 's professor b: so it 's just that they were operating from switchboard which was a completely telephone database phd c: and i if you 're comparing like if you wanna run recognition on the pzm you would want you do n't want to downsample the wh that professor b: no actually i would think that you would get better you 'd get better high frequencies in the local mike . grad e: we 're gon na have plenty of low frequency on the p z ms with the fans . 
phd c: there was just one more thing i wanted to say which is unrelated to the recognition except that it 's related good news also i got chuck fillmore to record meetings but he had too many people in his meetings and that 's too bad cuz they 're very animated and but jerry also so we 're starting on phd c: but he has fewer he wo n't have more than eight and it 's a meeting on even deeper understanding , edu , so that sounds interesting . as a compliment to our front - end meeting phd c: and so that 's gon na start monday and one of the things that i was realizing is it would be really great if anyone has any ideas on some time synchronous way that people in the meeting can make a comment to the person whose gon na transcribe it or put a push a button when they wanna make a note about " boy you should probably erase those last few " or " i want this not to be recorded now " like that s phd c: cuz i was thinking if the person who sets up the meeting is n't there and it 's a group that we and this came up talking to jerry also that is there any way for them to indicate to make that the qu request that they have that they make explicitly get addressed somehow so i if anyone has ideas or you could even write down " it 's about three twenty five and " professor b: what i was just suggesting is we have these this cross pad just for this purpose professor b: i if this or if it 's a question for the mail to dan but is this thing of two eight channel boards a maximum for this setup or could we go to a third board ? grad e: i . i 'll send mail to dan and ask . that it 's the maximum we can do without a lot of effort because it 's one board with two digital channels . grad e: e eight each . so it takes two fibers in to the one board . and so w if we wanna do that more than that we 'd have to have two boards , and then you have the synchronization issue . professor b: but that 's a question because that would if it was possible cuz it is i already we have a group of people in this room that can not all be miked and it 's not just cuz we have n't been to the store , right it 's phd d: it just it 's eight channels come in , does it have do with the sampling rate ? grad e: it 's eight . i have no idea . but each fiber channel has eight channels and there are two ch two fibers that go in to the card . professor b: it might be a hard limitation , one thing is it the whole thing as i said is all structured in terms of forty - eight kilohertz sampling so that pushes requirements up a bit grad e: then we 'd also have to get another add and another mixer and all that . grad e: so i 'll send a mail to dan and ask him . ok on the are we done with that ? so the oth topic is getting more mikes and different mikes , so i got a quote we can fit we have room for one more wireless and the wireless , this unit here is three fifty three hundred fifty dollars , it i did n't realize but we also have to get a tuner the receiver the other end , that 's four thirty and then also grad e: so that 's something like seven hundred eighty bucks for one more of these . and then also it turns out that the connector that this thing uses is proprietary of sony believe it or not and sony only sells this headset . so if we wanna use a different set headset the solution that the guy suggested and they lots of people have done is sony will sell you the jack with just wires coming out the end and then you can buy a headset that has pigtail and solder it yourself . 
and that 's the other solution and so the jacks are forty bucks apiece and the he recommended a crown cm three eleven ae headset for two hundred bucks apiece . professor b: there is n't this some thing that plugs in , you actually have to go and do the soldering yourself ? grad e: becau - the reason is the only thing you can get that will plug into this is this mike or just the connector . professor b: no i understand . the reason i ask is these handmade wiring jobs fall apart in use so the other thing is to see if we can get them to do a custom job and put it together for this . grad e: so so my question is should we go ahead and get na nine identical head - mounted crown mikes ? professor b: because there 's no point in doing that if it 's not gon na be any better . grad e: so why do n't we get one of these with the crown with a different headset ? and and see if that works . professor b: and see if it 's preferable and if it is then we 'll get more . grad e: , and he said they do n't have any of these in stock but they have them in la and so it will take about a week to get here . so ok to just go order ? grad e: and who is the contact if i wanna do an invoice cuz that 's how we did it before . professor b: y right cuz so one is for the daisy chain so that 's fifteen instead of sixteen phd c: can i ask a really dumb question ? is is there any way we can have like a wireless microphone that you pass around to the people who the extra people for the times they wanna talk that professor b: that 's a good idea . that 's not a dumb question , it 's a good idea , phd c: but there might be a way to say that there are gon na be these different people professor b: no that no that 's a very if we ca n't get another board and even if we can i have a feeling they 'll be some work . professor b: let 's figure that we have eight which are set up and then there 's a ninth which is passed around to professor b: but hand many hand - helds are built wi with anti - shock things so that it is less susceptible to hand noises . if you hold the lapel mike i you just get all k sorts of junk . grad e: i wonder if they have one that will hook up to this or whether again we 'll have to wire it ourselves . phd d: , you would n't want it to hook there you 'd just want it to hook into the receiver in the other room , grad e: it 's gon na be much easier to get one of these and just plug in a mike , is n't it ? professor b: no no so right , so this is a good point , so you have these mikes with a little antenna on the end professor b: and then just have that as the and then you can have groups of twenty people or whatever phd c: because there 's only as andreas pointed out actually in the large the larger the group the less interaction the less people are talking over each other grad e: so i people who have to leave can leave and do we have anything else to discuss or should we just do digits ? postdoc g: of some extra a couple of extra things i 'd like to mention . one of them is to give you a status in terms of the transcriptions so far . so as of last night i 'd assigned twelve hours and they 'd finished nine and my goal was to have eleven done by the end of the month , that by tomorrow we 'll have ten . 
phd c: i j and this i got this email from jane at like two in the morning postdoc g: and then also an idea for another meeting , which would be to have the transcribers talk about the data it 's a little bit phd c: that 's a great idea cuz i 'd like to g have it recorded so that we can remember all the little things , phd d: so if we got them to talk about this meeting , it would be a meta meeting . postdoc g: that 's what i ' m thinking , have them talk about the data and they ' ve made observations to me postdoc g: like they say this meeting that we think has so much overlap , it does but there are other groups of similar size that have very little , it 's part of it 's the norm of the group and all that and they have various observations that would be fun , . professor b: so maybe we could they could have a meeting more or less without us that to do this and we should record it and then maybe one or two of them could come to one of these meetings and could tell us about it . phd c: it 's they will get to transcribe their own meeting but they also get paid for having a break postdoc g: and then i wanted to also say something about the fiscus john fiscus visit tomorrow . and which is to say that w it 'll be from nine to one that i ' m going to offer the organization allow him to adjust it if he wishes but to be in three parts , the acoustic part coming first which would be the room engineering aspects other things and he 'll be also presenting what nist is doing and then number two would be a the transcription process so this would be a focus on like presegmentation and the modifications to the multitrans interface which allows more refined encoding of the beginnings and ends of the overlapping segments which dave gelbart 's been doing and then and the presegmentation thilo 's been doing and then the third part would he has some that 's i relevant with respect to nist and then the third one would be focus on transcription standards so at nist he 's interested in this establishment of a global encoding standard i would say and i want it , k see what they 're doing and also present what we ' ve chosen as ours and discuss that thing . and so but he 's only here until one and actually we 're thinking of noon being lunch time so hoping that we can get as much of this done as possible before noon . and everybody who wants to attend is welcome . so postdoc g: here mostly but i ' ve also reserved the barco room to figure out how that works in terms of like maybe having a live demonstration . professor b: ok but the nine o ' cl nine o ' clock will be i be in here . , ok . grad e: should y we make him wear andreas ' mike or would that just be too confusing ? professor b: no i do n't think it 's confusing . , it does n't confuse me . grad j: ok , my name is espen eriksen . i ' m a norwegian . this is my second semester at berkeley . currently i ' m taking my first graduate level courses in dsp and when i come back to norway i ' m gon na continue with the more of a research project work . so this semester i ' m starting up with a small project through dave gelbart which i ' m taking a course with i got in touch with him and he told me about this project . so with the help of dan ellis i ' m gon na do small project associated to this . what i ' m gon na try to do is use ech echo cancellation to handle the periods where you have overlapping talk . to try to do something about that . 
so currently i ' m just reading up on echo cancellation , s looking into the theory behind that and then hopefully i get some results . so it 's a project goes over the course of one semester . grad j: so i ' m just here today to introduce myself . tell about i 'll be working on this . grad f: we were just talking about something like this yesterday or yesterday with liz . about doing some of the echo cancellation or possibly the spectroanalysis over the overlaps , ###summary: the berkeley meeting recorder group discussed recognition results generated for 20 minutes of close-talking microphone data. recognition performance was very good , indicating promising results for forced alignment procedures and the ability to analyze other important signal information , e.g. prosody and overlapping speech. it was decided that close-talking data should be downsampled and fed to the sri recognizer to compare recognition performance , and that data from the far-field microphones should be tested on the recognizer as soon as possible. the group also discussed recording setup and equipment issues. a decision was made to purchase an additional head-mounted crown microphone. a tentative decision was also made to integrate the use of a hand-held wireless microphone to help compensate for the lack of available close-talking microphones. the collection of meeting recorder data is ongoing , and will include meetings by the berkeley even deeper understanding research group and , possibly , an organized discussion by members of the transcriber pool. following close-talking microphone recognition procedures , it was decided that data from the far-field microphones ( or pzms ) should be tested on the recognizer as soon as possible. speaker mn017 will compare close-talking microphone recognition results with those obtained for downsampled data. the sri recognizer will be set up at icsi to enable researchers to run their own variants. the group decided to purchase one additional head-mounted crown microphone. a tentative decision was also made to acquire a hand-held wireless microphone to pass around to additional meeting participants should the installation of more close-talking microphones prove too difficult. the suggestion of incorporating the use of cross pads during meeting recordings received favorable comments from participants. it was also tentatively decided to elicit meeting data from members of the transcriber pool discussing the meeting recorder corpus. during recognition procedures , a fixable acoustic glitch was discovered that causes speech channels to become slightly asynchronized. poorer recognition performance was yielded for speech recorded via the use of lapel microphones. recognition errors were largely due to misrecognition of the plural -s and out-of-vocabulary items. the current recording setup is limited in that not all bmr meeting participants have their own close-talking microphone. a 'no-frills' recognizer , trained on switchboard acoustic models and both switchboard and broadcast news language models , was used to test data for one transcribed meeting. despite a glitch affecting the synchronized output of remaining data , recognition results for the first 20 minutes were very positive. the recognizer was successful in making accurate gender distinctions among speakers. it is anticipated that subsequent forced alignment procedures will also generate good results , enabling the group to analyze other types of signal information , such as prosody and overlapping speech segments .
future work will involve getting assessments from linguists of the accuracy of the time marks in force-aligned data. the collection of meeting recorder data is ongoing , and will include meetings by the berkeley even deeper understanding research group. transcriptions are also ongoing , with nearly 10 hours of meeting recorder data transcribed so far. future work will involve working out a system for getting subjects to approve transcriptions so that confidentiality agreements are upheld and data may be shared with other research groups. a visiting student from norway is working with a member of icsi to conduct a related project on echo cancellation for handling segments of overlapping speech. finally , the group is expecting a visit from a representative of nist.
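The echo-cancellation project introduced in the dialogue above is described only at the level of intent. For readers unfamiliar with the technique, the usual starting point is a normalized LMS (NLMS) adaptive filter, which learns the crosstalk path from one talker's channel into another's and subtracts the estimate. The sketch below is illustrative only — the filter order, step size, and toy signals are assumptions, not the student's actual design.

```python
import numpy as np

def nlms_cancel(reference, mixed, order=64, mu=0.5, eps=1e-8):
    """Adaptively estimate the path from `reference` (the interfering
    talker's mic) into `mixed` (the target talker's mic) and subtract it."""
    w = np.zeros(order)                     # adaptive FIR weights
    out = np.zeros(len(mixed))
    for n in range(order, len(mixed)):
        x = reference[n - order:n][::-1]    # most recent samples first
        e = mixed[n] - w @ x                # residual after cancellation
        w += (mu / (eps + x @ x)) * e * x   # normalized LMS update
        out[n] = e
    return out

# toy demo: a delayed, attenuated copy of `ref` leaks into the mixture
rng = np.random.default_rng(0)
ref = rng.standard_normal(8000)
leak = 0.3 * np.concatenate([np.zeros(10), ref[:-10]])
target = 0.1 * rng.standard_normal(8000)
residual = nlms_cancel(ref, target + leak)
print("leak power:    ", np.mean(leak ** 2))
print("residual error:", np.mean((residual[1000:] - target[1000:]) ** 2))
```

In the overlap setting, `reference` would be the interfering talker's close-talking channel and `mixed` the target talker's channel during an overlap region.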
phd g: ok . your channel number 's already on this blank sheet . so you just if you can phd g: but if you think it 's . it 's a default . but set it higher if you like . professor f: it 's not showing much . test , test , test , test . ok , that seems better ? ok , good . , that 's good . that 's ahh . mmm . so i had a question for adam . have we started already ? professor f: . great idea . i was gon na ask adam to , say if he thought anymore about the demo because it occurred to me that this is late may and the darpa meeting is in mid july . , but i do n't remember w what we i know that we were gon na do something with the transcriber interface is one thing , but there was a second thing . anybody remember ? phd g: , we were gon na do a mock - up , like , question answering , that was separate from the interface . do you remember ? remember , like , asking questions and retrieving , but in a pre - stored fashion . professor f: alright . so anyway , you have to sort out that out and get somebody going on it cuz we 're got a month left . so . professor f: ok . so , what are we g else we got ? you got you just wrote a bunch of . phd g: no . that was all , previously here . i was writing the digits and then i realized i could xerox them , phd g: because i did n't want people to turn their heads from these microphones . so . we all , have the same digit form , for the record . professor f: that 's . so , the choice is , which do we want more , the comparison , of everybody saying them at the same time or the comparison of people saying the same digits at different times that ? phd g: , it actually it might be good to have them separately and have the same exact strings . , we could use them for normalizing , but it goes more quickly doing them in unison . phd e: but anyway , they wo n't be identical as somebody is saying zero in some sometimes , saying o , and so , it 's not i not identical . professor f: really boring chorus . do we have an agenda ? adam usually tries to put those together , but he 's ill . professor f: is there that 's happened about , , the sri recognizer et cetera , tho those things that were happening before with ? y y you guys were doing a bunch of experiments with different front - ends and then with is is that still where it was , the other day ? phd d: now the you saw the note that the plp now is getting the same as the mfcc . right ? phd c: . actually it looks like it 's getting better . so . but but it 's not phd c: but , that 's not d directly related to me . does n't mean we ca n't talk about it . , it seems it looks l i have n't the it 's the experiment is still not complete , but , it looks like the vocal tract length normalization is working beautifully , actually , w using the warp factors that we computed for the sri system and just applying them to the icsi front - end . phd c: just had to take the reciprocal of the number because they have different meanings in the two systems . phd c: but one issue actually that just came up in discussion with liz and don was , as far as meeting recognition is concerned , we would really like to , move , to , doing the recognition on automatic segmentations . because in all our previous experiments , we had the , we were essentially cheating by having the , , the h the hand - segmentations as the basis of the recognition . 
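One concrete detail a few turns back — taking the reciprocal of the warp factors — reflects the fact that VTLN warp factors can be defined either as the factor by which the frequency axis is stretched or as the factor by which it is compressed, so a value alpha under one convention is 1/alpha under the other. A trivial sketch of the bookkeeping, with invented speaker IDs and values:

```python
# invented warp factors as one system writes them out (speaker -> alpha)
warps_a = {"spkr_01": 0.94, "spkr_02": 1.06, "spkr_03": 1.00}

# the other front-end reads the factor with the opposite sense, so the
# same physical warp is expressed as the reciprocal
warps_b = {spk: 1.0 / a for spk, a in warps_a.items()}

for spk, a in warps_a.items():
    print(f"{spk}: {a:.3f} in convention A -> {warps_b[spk]:.3f} in convention B")
```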
and so now with thilo 's segmenter working so , we should consider doing a phd e: that - that 's what i wanted to do anyway , so we should just get together and phd g: and even the good thing is that since you , have high recall , even if you have low precision cuz you 're over - generating , that 's good because we could train noise models in the recognizer for these kinds of , transients and things that come from the microphones , phd g: but i know that if we run recognition unconstrained on a whole waveform , we do very poorly because we 're getting insertions in places what that you may be cutting out . so we do need some pre - segmentation . phd c: we should we should consider doing some extra things , like , , retraining or adapting the models for background noise to the to this environment , . phd g: right now they 're discrete , yes or no for a speaker , to consider those particular speaker background models . there 's lots of ins interesting things that could be done . phd d: . so , talked with brian and gave him the alternatives to the single beep at the end of each utterance that we had generated before . phd d: the chuck chunks . right . and so he talked it over with the transcriber and the transcriber thought that the easiest thing for them would be if there was a beep and then the nu a number , a digit , and then a beep , at the beginning of each one and that would help keep them from getting lost . and , so adam wrote a little script to generate those style , beeps phd d: and so we 're i came up here and just recorded the numbers one through ten . phd d: . he then he d i recorded actually , i recorded one through ten three times at three different speeds and then he picked . phd d: he liked the fastest one , so he just cut those out and spliced them in between , two beeps . phd e: it will be funny when you 're really reading digits , and then there are the chunks with your digits in ? phd d: . ! maybe . and she said it was n't gon na the transcriber said it would n't be a problem cuz they can actually make a template , that has beep , number , beep . so for them it 'll be very quick to put those in there when they 're transcribing . so , we we 're gon na send them one more sample meeting , and thilo has run his segmentation . adam 's gon na generate the chunked file . and then , i 'll give it to brian and they can try that out . and when we get that back we 'll see if that fixes the problem we had with , too many beeps in the last transcription . professor f: ok . do w do what do you have any idea of the turn - around on those steps you just said ? phd d: , i . the last one seemed like it took a couple of weeks . , maybe even three . , that 's just the i b m side . our side is quick . , i . how long does your ? professor f: , i meant the overall thing . e u the reason i ' m asking is because , jane and i have just been talking , and she 's just been doing . , e a , further hiring of transcribers . professor f: and so we do n't really know exactly what they 'll be doing , how long they 'll be doing it , and , because right now she has no choice but to operate in the mode that we already have working . and , so it 'd be it 'd be good to get that resolved , soon as we could , phd d: i , i hope @ we can get a better estimate from this one that we send them . 
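The chunk marking just described — a beep, a spoken digit, another beep, then the speech chunk — is easy to picture in code. This is a hedged reconstruction, not the actual script: the sample rate, tone parameters, and the cycling through the recorded digits one to ten are all assumptions.

```python
import numpy as np

SR = 16000  # assumed sample rate

def beep(freq=1000.0, dur=0.25, sr=SR):
    """Short sine tone with a 10 ms linear fade to avoid clicks."""
    t = np.arange(int(dur * sr)) / sr
    tone = 0.3 * np.sin(2 * np.pi * freq * t)
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)
    return tone * fade

def mark_chunks(chunks, spoken_digits):
    """Prefix chunk i with: beep, a spoken digit, beep.
    `spoken_digits[k]` is the recorded waveform for digit k."""
    pieces = []
    for i, chunk in enumerate(chunks):
        pieces += [beep(), spoken_digits[(i % 10) + 1], beep(), chunk]
    return np.concatenate(pieces)

# toy usage with silence standing in for the real recordings
digits = {k: np.zeros(int(0.4 * SR)) for k in range(1, 11)}
chunks = [np.zeros(SR), np.zeros(2 * SR)]
out = mark_chunks(chunks, digits)
print(len(out) / SR, "seconds of marked audio")
```

The fixed beep-digit-beep pattern is also what makes the transcribers' template idea work, since every chunk boundary sounds identical.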
professor f: in particular i would really hope that when we do this darpa meeting in july that we have we 're into production mode , somehow , that we actually have a stream going and we know how it does and how it operates . that would certainly be a very good thing to know . professor f: maybe before we do the meeting info organize thing , maybe you could say relevant about where we are in transcriptions . postdoc a: ok . so , we , the transcribers have continued to work past what i ' m calling " set one " , which was the s the set that i ' ve been , ok , talking about up to this point , but , they ' ve gotten five meetings done in that set . right now they 're in the process of being edited . , the , let 's see , i hired two transcribers today . i ' m thinking of hiring another one , which will because we ' ve had a lot of attrition . and that will bring our total to postdoc a: so , one of them had a baby . , one of them really w was n't planning postdoc a: , one of them , had never planned to work past january . , it 's th all these various things , cuz we , we presented it as possibly a month project back in january and , so it makes sense . , through attrition we ' ve we 're down to two , but they 're really solid . we 're really lucky the two that we kept . and , i do n't mean anything against the others . what is we ' ve got a good core . no . we had a good core phd g: , they wo n't hear this since they 're going . they wo n't be transcribing this meeting . postdoc a: , but still . , i d it 's just a matter of we w we 're we ' ve got , postdoc a: two of the ones who , ha had been putting in a lot of hours up to this point and they 're continuing to put in a lot of hours , which is wonderful , and excellent work . and so , then , in addition , i hired two more today and i ' m planning to h hire a third one with this within this coming week , but the plan is just as , morgan was saying we discussed this , and the plan right now is to keep the staff on the leaner side , rather than hiring , like , eight to ten right now , because if the ibm thing comes through really quickly , then , we would n't wanna have to , , lay people off and . so . and this way it 'll , i got really a lot of response for my notice and i could hire additional people if i wish to . professor f: . an - and the other thing is , in the unlikely event and since we 're so far from this , it 's a little hard to plan this way in the unlikely event that we actually find that we have , transcribers on staff who are twiddling their thumbs because , there 's , all the that was sitting there has been transcribed and they 're faster the pipeline is faster than , than the generation , i in the day e event that day actually dawns , i bet we could find some other for them to do . professor f: so that , a as we were talking , if we hire twelve , then we could , run into a problem later . , we also just could n't sustain that forever . but but , for all sorts of reasons but if we hire f , f we have five on staff five or six on staff at any given time , then it 's a small enough number so we can be flexible either way . phd g: it 'd be great , too , if , we can we might need some help again getting the tighter boundaries or some hand to experiment with , to have a ground truth for this segmentation work , which i you have some already that was really helpful , and we could probably use more . phd e: mmm , . 
that was a thing i planned working on , is , to use the transcriptions which are done by now , and to use them as , phd e: and to use them for training a or for fo whatever . to to create some speech - nonspeech labels out of them , and , but that 's a thing w was w what i ' m just looking into . postdoc a: the the pre - segmentations are so much are s so extremely helpful . now there was , i g so , a couple weeks ago i needed some new ones and it happened to be during the time that he was on vacation f for just very few days you were away . but it happened to be during that time i needed one , so i started them on the non - pre - segmented and then switched them over to yours and , they always appreciate that when they have that available . and he 's , usually , , . so they really appreciate it . but i was gon na say that they do adjust it once in a while . , once in a while there 's something like , postdoc a: , and e actually you talked to them . did n't you ? did you ? have you ? postdoc a: and and she was and so , i asked her , they 're very perceptive . i really want to have this meeting of the transcribers . i have n't done it yet , but i wanna do that and she 's out of town , for a couple of weeks , but i wanna do that when she returns . , cuz she was saying , in a span of very short period we asked it seems like the ones that need to be adjusted are these things , and she was saying the short utterances , the , postdoc a: , you 're you 're aware of this . but but actually i it 's so correct for so much of the time , that it 's an enormous time saver and it just gets tweaked a little around the boundaries . phd g: is there actually a record of where they change ? , you can compare , do a diff on the just so that we knew postdoc a: you could do it . it 's it 's complicated in that , hhh , i phd e: actually , when they create new , new segments , it will be , not that easy but . one could do that . phd g: , if we keep a old copy of the old time marks just so that if we run it we know whether we 're which ones were cheating postdoc a: there is a there is one problem with that and that is when they start part way through then what i do is i merge what they ' ve done with the pre - segmented version . postdoc a: so it 's not a pure condition . wha - what you 'd really like is that they started with pre - segmented and were pre - segmented all the way through . postdoc a: and , @ i , the it was n't possible for about four of the recent ones . but , it will be possible in the future because we 're , . phd g: mmm , that 's great . as long as we have a record , i , of the original automatic one , we can always find out how we would do fr from the recognition side by using those boundaries . , a completely non - cheating version . also if you need someone to record this meeting , i ' m happy to for the transcribers i could do it , or chuck or adam . phd d: . so , , jane and adam and i had a meeting where we talked about the reorganization of the directory structure for all of the meeting phd d: no . for all the meeting recorder data . we should have . and so we ' ve got a plan for what we 're gon na do there . and then , jane also s prepared a , started getting all of the meetings organized , so she prepared a spreadsheet , which i spent the last couple of days adding to . 
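On the suggestion just above of keeping the original automatic time marks and diffing them against the transcribers' edits: a minimal sketch of such a comparison follows. The (start, end) pair format and the 50 ms tolerance are assumptions, not the project's actual transcript format.

```python
def diff_segments(auto, edited, tol=0.05):
    """Report edited boundaries that moved more than `tol` seconds,
    plus automatic segments that were dropped outright.
    Each segment is a (start, end) pair in seconds."""
    moved, matched = [], set()
    for (s1, e1) in edited:
        # nearest automatic segment by midpoint
        near = min(auto, key=lambda se: abs((se[0] + se[1]) / 2 - (s1 + e1) / 2))
        matched.add(near)
        if abs(near[0] - s1) > tol or abs(near[1] - e1) > tol:
            moved.append((near, (s1, e1)))
    removed = [se for se in auto if se not in matched]
    return moved, removed

auto = [(0.0, 2.1), (2.5, 4.0), (4.2, 4.5)]
edited = [(0.0, 2.3), (2.5, 4.0)]     # one boundary moved, one segment dropped
moved, removed = diff_segments(auto, edited)
print("moved:", moved)
print("removed:", removed)
```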
so i went through all of the data that we have collected so far , and have been putting it into , a spreadsheet with start time , the date , the old meeting name , the new meeting name , the number of speakers , the duration of the meeting , comments , what its transcription status is , all that . and so , the idea is that we can take this and then export it as html and put it on the meeting recorder web page so we can keep people updated about what 's going on . phd d: , i ' ve got ta get some more information from jane cuz i have some gaps here that i need to get her to fill in , but so far , as of monday , the fourteenth , we ' ve had a total number of meeting sixty - two hours of meetings that we have collected . and , some other interesting things , average number of speakers per meeting is six . , and i ' m gon na have on here the total amount that 's been transcribed so far , but i ' ve got a bunch of , that 's what i have to talk to jane about , figuring out exactly which ones have been completed and . but , this 'll be a thing that we can put up on the web site and people can be informed of the status of various different ones . and it 'll also list , like under the status , if it 's at ibm or if it 's at icsi , or if it 's completed or which ones we 're excluding and there 's a place for comments , so we can , say why we 're excluding things and . professor f: now would the ones that , are already transcribed we h we have enough there that c , we ' ve already done some studies and , should n't we go through and do the business - es u of having the , , participants approve it , for approve the transcriptions for distribution and ? postdoc a: , interesting idea . in principle , i would say yes , although i still am doing some the final - pass editing , trying to convert it over to the master file as the being the channelized version and it 's , it seems like i get into that a certain way and then something else intervenes and i have to stop . cleaning up the things like the , places where the transcriber was uncertain , and doing spot - checking here and there . so , , i it would make sense to until th that 's done , but professor f: , le let me put in another a milestone as i did with the , the pipeline . , we are gon na have this darpa meeting in the middle of july , professor f: and it w it 'd be given that we ' ve been we ' ve given a couple public talks about it already , spaced by months and months , it 'd be pretty bad if we continued to say none of this is available . professor f: right . so we can s we wanna be able to say " here is a subset that is available right now " phd c: and they do n't have to approve , th an edited version , they can just give their approval to whatever version professor f: , in principle , yes . but , i if somebody actually did get into some legal issue with it then we phd c: bu . but th , the editing will continue . presumably if s errors are found , they will be fixed , but they wo n't change the content of the meetings . phd g: , i if jane is clarifying question , then , how can they agree to it before they know her final version ? postdoc a: the other thing , too , is there can be subtleties where a person uses this word instead of that word , which @ could ' ve been transcribed in the other way . postdoc a: and no and they would n't have been slanderous if it had been this other word . ? 
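The spreadsheet-to-web-page step described above is a few lines in any scripting language. A sketch, assuming the status sheet is exported as CSV with roughly the columns named in the discussion (the file name and layout are invented):

```python
import csv, html

COLS = ["date", "old name", "new name", "speakers", "duration", "status", "comments"]

def status_table(csv_path):
    """Render the meeting-status CSV as a simple HTML table."""
    rows = []
    with open(csv_path, newline="") as f:
        for rec in csv.DictReader(f):
            cells = "".join(f"<td>{html.escape(rec.get(c) or '')}</td>" for c in COLS)
            rows.append(f"<tr>{cells}</tr>")
    head = "".join(f"<th>{c}</th>" for c in COLS)
    return f"<table border='1'><tr>{head}</tr>{''.join(rows)}</table>"

# print(status_table("meetings.csv"))  # hypothetical file
```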
professor f: i it , there is a point at which i agree it becomes ridiculous because , you could do this final thing and then a year from now somebody could say , that should be a period and not a question mark . right ? and you do n't you there 's no way that we 're gon na go back and ask everybody " do you approve this , this document now ? " so what it is that the thing that they sign i have n't looked at it in a while , but it has to be open enough that it says " ok , from now on , now that i ' ve read this , you can use do anything you want with these data . " and , but , i we wanna so , assuming that it 's in that wording , which i do n't remember , i we just wanna have enough confidence ourselves that it 's so close to the final form it 's gon na be in , a year from now that they 're postdoc a: i agree . mmm . i agree . it 's just , a question of , if the person is using the transcript as the way of them judging what they said and whether it was slanderous , then it seems like it 's i it needs to be more correct than if we could count on them re - listening to the meeting . because it becomes , in a way a f , a legal document i if they ' ve to that . professor f: , i forget how we end right . i forget how we ended up on this , but i remember my taking the position of not making it so easy for everybody to observe everything and adam was taking the position of having it be really straightforward for people to check every aspect of it including the audio . and i do n't remember who won , adam or me , but postdoc a: , if it 's only the transcript , though , th this is my point , that professor f: , the , that 's why i ' m bringing this up again , because i ca n't remember how we ended up . professor f: that it was the transcrip he wanted to do a web interface that would make it professor f: that would give you access to the transcript and the audio . that 's what adam wanted . and i do n't remember how we ended up . phd g: , with the web interface it 's interesting , because you could allow the person who signs to be informed when their transcript changes , like that . and , i would say " no " . like , i do n't wanna know , but some people might be really interested and then y in other words , they would be informed if there was some significant change other than typos and things like that . phd g: , i what happened to the small heads thing , but i j , i ' m just saying that , like , you can say that any things that are deemed phd g: anyway . , i agree that at some point people probably wo n't care about typos but they would care about significant meaning changes and then they could be asked for their consent , i , if those change . cuz assumi assuming we do n't really distribute things that have any significant changes from what they sign anyway . phd c: we just have to give them a chance to listen to it , and if they do n't , that 's their problem . postdoc a: unfortunately , in the sign thing that they signed , it says " transcripts " . phd g: i that 's a lot to ask for people that have been in a lot of meetings . professor f: w anyway , have n't we ' ve gone down this path a number of times . i know this can lead to extended conversations and not really get anywhere , so let me just suggest that , off - line that , the people involved figure it out and take care of it before it 's july . ok . so so that in july we can tell people " yes , we have this and you can use it " . professor f: so , let 's see . what else we got ? . 
don did a report about his project in class and , an oral and written version . so that was he was doing with you . phd g: , it 's i one thing we 're learning is that the amount we have eight meetings there because we could n't use the non - native all non - native meetings and it 's , probably below threshold on enough data for us for the things we 're looking at because the prosodic features are very noisy and so you need a lot of data in order to model them . , so we 're starting to see some patterns and we 're hoping that maybe with , i , double or triple the data with twenty meetings or so , that we would start to get better results . but we did find that some of the features that , i gue jane would know about , that are expressing the distance of , boundaries from peaks in the utterance and some local , range pitch range effects , like how close people are to their floor , are showing up in these classifiers , which are also being given some word features that are cheating , cuz they 're true words . , so these are based on forced alignment . word features like , word frequency and whether or not something 's a backchannel and . so , we 're starting to see , some interesting patterns . professor f: so the dominant features , including everything , were those quasi - cheating things . where these are grad b: . sometimes positions in sentences , or in spurts , was helpful . i if that 's cheating , too . phd g: but roughly speaking , the recognized words are gon na give you a similar type of position . phd g: y it should be . , we and actually that 's one of the things we 're interested in doing , is a grad b: ti just p time position , like when the word starts ? i if that was in the phd g: and it depends on speaking rate . . that 's actually why i did n't use it at first . but we one of the interesting things was i you reported on some te punctuation type finding sentence boundaries , finding disfluency boundaries , and then i had done some work on finding from the foreground speech whether or not someone was likely to interrupt , so where , if i ' m talking now and someone and andreas is about to interrupt me , is he gon na choose a certain place in my speech , either prosodically or word - based . and there the prosodic features actually showed up and a neat thing even though the word features were available . and a neat thing there too is i tried some putting the speaker so , i gave everybody a short version of their name . so the real names are in there , which we could n't use . , we should use i ds . and those do n't show up . so that means that overall , it was n't just modeling morgan , or it was n't just modeling a single person , but was trying to , get a general idea the model the tree classifier was trying to find general locations that were applicable to different speakers , even though there are huge speaker effects . the but the main limitation now is i because we 're only looking at things that happen every ten words or every twenty words , we need more data and more data per speaker . it 'd also be interesting to look at the edu meetings because we did include meeting type as a feature , so whether you were in a r meeting recorder meeting or a robustness meeting did matter to interrupts because there are just fewer interrupts in the robustness meetings . phd g: and so the classifier learns more about morgan than it does about the average person , which is not bad . it 'd probably do better than , but it was n't generalizing . 
so it 's and don , we have a long list of things he 's starting to look at now over the summer , where we can and he 'll be able to report on more things in the future . but it was great that we could at least go from the , jane 's transcripts and the , recognizer output and get it to this point . and it 's something mari can probably use in her preliminary report like , " , we 're at the point where we 're training these classifiers and we 're just reporting very preliminary but suggestive results that some features , both word and pro prosodic , work . " the other thing that was interesting to me is that the pitch features are better than in switchboard . and that really is from the close - talking mikes , cuz the pitch processing that was done has much cleaner behavior than the switchboard telephone bandwidth . phd g: . , first of all , the pitch tracks are m have less , halvings and doublings than switchboard and there 's a lot less dropout , so if you ask how many regions where you would normally expect some vowels to be occurring are completely devoid of pitch information , in other words the pitch tracker just did n't get a high enough probability of voicing for words for , five word there are much fewer than in switchboard . so the missing we had a big missing data problem in switchboard and , so the features were n't as reliable cuz they were often just not available . phd d: could it have to do with the lower frequency cut - off on the switchboard ? phd g: so that 's actually good . ma - maybe . , the tele we had telephone bandwidth for switchboard and we had the an annoying telephone handset movement problem that may also affect it . so we 're just getting better signals in this data . which is . anyway , don 's been doing a great job and we hope to continue with , andreas 's help and also some of thilo 's help on this , phd g: to try to get a non - cheating version of how all this would work . professor f: has has , ? we just , just talked about this the other day , but h has anybody had a chance to try changing , insertion penalty things with the , using the tandem system input for the ? phd c: there were a little the relative number of there were a higher number of deletions , actually . so , you , so , actually it preferred to have a positive er , negative insertion penalty , phd c: which means that , but , it did n't change th the by adjusting that the , . the error changed by probably one percent or so . but , given that word error rate is so high , that 's not a phd c: that 's not the problem . no . but , we s just , , chuck and i talked and the @ next thing to do is probably to tune the , the size of the gaussian system , @ to this feature vector , which we have n't done . we just used the same configuration as we used for the standard system . and , , dan @ dan just sent me a message saying that cmu used , something like ten gaussians per cluster , each mixture has ten gaussians phd c: so that 's a big difference and it might be way off and give very poorly trained , , gaussians that way , an and poorly trained mixture weights . so so , we have the turn - around time on the training when we train only the a male system with , , our small training set , is less than twenty - four hours , so we can run lots of , just brute force , try a whole bunch of different , settings . and , with the new machines it 'll be even better . phd c: but the plp features work , , continue to improve the , as i said before , the using dan 's , vocal tract normalization option works very . 
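For the insertion-penalty tuning mentioned earlier in this exchange: a hypothesis is scored roughly as acoustic log-likelihood plus a weighted language-model score plus a per-word term, and shifting that per-word term trades insertions against deletions. Sign conventions differ between toolkits, which is why "positive, er, negative" is a natural slip. A schematic example with invented numbers:

```python
def hyp_score(ac, lm, n_words, lm_weight=8.0, wip=0.0):
    """Schematic combined score; `wip` is the per-word insertion term."""
    return ac + lm_weight * lm + wip * n_words

short = dict(ac=-1200.0, lm=-20.0, n_words=8)    # 8-word hypothesis
long_ = dict(ac=-1188.0, lm=-23.0, n_words=11)   # 11-word hypothesis

for wip in (-5.0, 0.0, 5.0):
    s, l = hyp_score(**short, wip=wip), hyp_score(**long_, wip=wip)
    print(f"wip={wip:+.0f}: short={s:.0f} long={l:.0f} ->",
          "longer hypothesis wins" if l > s else "shorter hypothesis wins")
```

With these made-up scores, only the positive per-word term flips the decision toward the longer (more insertion-prone) hypothesis.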
so , @ i ran one experiment where we 're just did the vocal tract le normalization only in the test data , so i did n't bother to retrain the models , and it improved by one percent , which is about what we get with , just @ actually doing both training and test normalization , with , the , with the standard system . so , in a few hours we 'll have the numbers for the for retraining everything with vocal tract length normalization and so , that might even improve it further . so , it looks like the p l - fea p features do very now with after having figured out all these little tricks to get it to work . phd g: . so you mean you improve one percent over a system that does n't have any v t l in it already ? professor f: ok . so then we 'll have our baseline to compare the currently hideous , new thing with . phd c: right . a right . and and what that suggests also is that the current switchboard mlp is n't trained on very good features . phd c: , because it was trained on whatever , was used , last time you did hub - five , which did n't have any of the professor f: right . but all of these effects were j like a couple percent . , y the phd c: , but if you add them all up you have , almost five percent difference now . professor f: add all of them . one was one point five percent and one was point eight . phd c: , actually , and it 's , what 's actually qu interesting is that with , you m prob maybe another half percent if you do the vtl in training , and then interestingly , if you optimize you get more of a win out of rescoring the , , the n best lists , and optimizing the weights , than professor f: . but the part that 's actually adjustment of the front - end per se as opposed to doing putting vtln in is it was a couple percent . right ? it was it was there was one thing that was one and a half percent and one that was point eight . so and let me see if i remember what they were . one of them was , the change to , because it did it all at once , to , from bark scale to mel scale , which i really feel like saying in quotes , because @ they 're essentially the same scale but the but any i individual particular implementation of those things puts things in a particular place . professor f: so that 's why i wanted to look i still have n't looked at it yet . i wanna look at exactly where the filters were in the two , and it 's probably something like there 's one fewer or one more filter in the sub one kilohertz band and for whatever reason with this particular experiment it was better one way or the other . , it could be there 's something more fundamental but it , i it yet . and the other and the other that was like one and a half , and then there was point eight percent , which was what was the other thing ? professor f: we d were n't able to separate them out cuz it was just done in one thing . but then there was a point eight percent which was something else . professor f: do you remember the ? , . so that was , that one claim credit for , i in terms of screwing it up in the first place . so that someone e until someone else fixed it , which is that , i never put when i u we had some problems before with offsets . this inf this went back to , wall street journal . so we had , ea everybody else who was doing wall street journal knew that there were big dc offsets in th in these data in those data and nobody happened to mention it to us , and we were getting these , like , really terrible results , like two , three times the error everybody else was getting . 
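An aside on the bark-versus-mel question raised above — whether one scale puts one more filter below 1 kHz than the other can be checked mechanically by spacing N centers evenly on each warped scale and counting. The sketch uses the common textbook mel formula and Traunmüller's Bark approximation; the filterbank size and frequency range are assumptions.

```python
import numpy as np

def mel(f):      return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_inv(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Traunmueller's Bark approximation and its inverse
def bark(f):     return 26.81 * f / (1960.0 + f) - 0.53
def bark_inv(z): return 1960.0 * (z + 0.53) / (26.28 - z)

n_filt, f_lo, f_hi = 20, 0.0, 4000.0   # assumed filterbank size / range
for name, fwd, inv in [("mel", mel, mel_inv), ("bark", bark, bark_inv)]:
    centers = inv(np.linspace(fwd(f_lo), fwd(f_hi), n_filt + 2)[1:-1])
    print(f"{name}: {np.sum(centers < 1000)} of {n_filt} centers below 1 kHz")
```

With these particular assumptions the two scales do indeed differ by one filter in the sub-1 kHz band, which is exactly the kind of shift conjectured here.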
and then in casual conversation someone ment mentioned " , i , you 're taking care of the offsets . " i said " what offsets ? " and at that point , we were pretty new to the data and we 'd never really , like , looked at it on a screen and then when we just put it on the screen and wroop ! there 's this big dc offset . so , in plp professor f: no . it 's just , it 's not uncommon for recorded electronics to have different , dc offsets . professor f: it 's it 's , no big deal . it 's , you could have ten , twenty , maybe thirty millivolts , whatever , and it 's consistently in there . , most people 's front - ends have pre - emphasis with it , with zero at zero frequency , so that it 's irrelevant . , but with p l p , we did n't actually have that . we had we had the equivalent of pre - emphasis in a , fletcher - munson style weighting that occurs in the middle of p l but it does n't actually have a zero at zero frequency , like , , typical simple fr pre - emphasis does . we had something more fancy . it was later on it did n't have that . so at that point i reali " sh we better have a high - pass filter " just , just take care of the problem . so i put in a high - pass filter at , ninety hertz or so , for a sixteen kilohertz sampling rate . and i never put anything in to adjust it for different sampling rates . and so , the code does n't know anything about that and so this is all at eight kilohertz and so it was at forty - five hertz instead of at ninety . so , i if dan fixed it or , what he professor f: he made it a parameter . so . , i if he did it right , he did fix it and then it 's taking care of sampling rate , which is great . phd c: u and but hpf , when you put a number after it , uses that as the hertz value of the cut - off . professor f: , frankly , we never did that with the rasta filter either , so the rasta filter is actually doing a different thing in the modulation spectral domain depending on what sampling rate you 're doing , which is another old bug of mine . but , . so that was the problem there was th we had always intended to cut off below a hundred hertz and it just was n't doing it , so now it is . so , that hep that helped us by , like , eight tenths of a percent . it still was n't a big deal . phd c: ok . , but , , again , after completing the current experiments , we 'll we can add up all the differences professor f: but but , i my point was that , the hybrid system thing that we did was , primitive in many ways . professor f: and i agree with you that if we fixed lots of different things and they would all add up , we would probably have a competitive system . but not that much of it is due to the front - end per se . maybe a couple percent of it is , as far as see from this . , unless you call , if you call vtl the front - en front - end , that 's , a little more . but that 's more both , . phd d: one experiment we should we 'll probably need to do though when , at some point , is , since we 're using that same the net that was trained on plp without all these things in it , for the tandem system , we may wanna go back and retrain , phd d: , for the tandem . , so we can see if it what effect it has on the tandem processing . phd c: so so , do we expect ? at this point i ' m as , e i ' m wondering is it can we expect , a tandem system to do better than a properly trained , a gaussian system trained directly on the features with , the right ch choice of parameters ? professor f: , that 's what we 're seeing in other areas . yes . 
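The sampling-rate bug recounted above — a 90 Hz cutoff implicitly tied to a 16 kHz rate, silently becoming 45 Hz at 8 kHz — disappears once the cutoff is stated in Hz and normalized by the actual rate at run time. A sketch of the fixed behavior, assuming scipy is available (this is not the original front-end code):

```python
import numpy as np
from scipy import signal

def highpass(x, sr, cutoff_hz=90.0, order=2):
    """High-pass filter with the cutoff fixed in Hz, whatever the
    sampling rate; this also removes any DC offset in the recording."""
    # normalizing by the true rate here is what the buggy version skipped
    sos = signal.butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return signal.sosfilt(sos, x)

for sr in (8000, 16000):
    t = np.arange(sr) / sr
    x = 0.02 + np.sin(2 * np.pi * 30 * t)   # small DC offset plus 30 Hz hum
    y = highpass(x, sr)
    print(f"sr={sr}: mean before={x.mean():+.4f}  after={y.mean():+.4f}")
```

The same normalization argument applies to the RASTA filter mentioned in passing, whose modulation-frequency response likewise shifts if the analysis rate changes.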
so , it 's so , phd d: so , we but but we may not . , if it does n't perform as , we may not know why . cuz we need to do the exact experiment . professor f: , the reason to think it should is because you 're putting in the same information and you 're transforming it to be more discriminative . so . now , in some databases i would n't expect it to necessarily give you much and part of what i view as the real power of it is that it gives you a transformational capability for taking all sorts of different wild things that we do , not just th the standard front - end , but other things , like with multiple streams and , and allows you to feed them to the other system with this through this funnel . , so i think that 's the real power of it . i would n't expect huge in huge improvements . , but it should at least be roughly the same and maybe a little better . if it 's , like way worse then , phd d: so , morgan , an another thing that andreas and i were talking about was , so @ in the first experiment that he did we just took the whole fifty - six , outputs and that 's , compared to a thirty - nine input feature vector from either mfcc or plp . but one thing we could do is professor f: let let me just ask you something . when you say take the fifty - six outputs , these are the pre final nonlinearity outputs phd d: that 's what we did . so one thing we were wondering is , if we did principal components and , say , took out just thirteen , and then did deltas and double - deltas on that phd d: so we treated the th first thirteen as though they were standard features . , did dan do experiments like that to ? professor f: . talk with stephane . he did some things like that . it was either him or carmen . i forget . professor f: these were all different databases and different , in htk and all that , so i it may not apply . but my recollection of it was that it did n't make it better but it did n't make it worse . but , again , given all these differences , maybe it 's more important in your case that you not take a lot of these low - variance , components . phd d: cuz in a sense , the net 's already got quite a bit of context in those features , so if we did deltas and double - deltas on top of those , we 're getting even more . phd c: but there the main point is that , , it took us a while but we have the procedure for coupling the two systems debugged now and , there 's still conceivably some bug somewhere in the way we 're feeding the tandem features , either generating them or feeding them to this to the sri system , but it 's phd c: and i ' m wondering how we can debug that . how i ' m actually f quite that the feeding the features into the system and training it up , phd c: that 's this that 's essentially the same as we use with the ce with the p l p fe features . and that 's working great . so . i . phd c: there we could the another degree of freedom is how do you generate the k l t transform ? we to professor f: , and another one is the normalization of the inputs to the net . these nets are trained with particular normalization and when that gets screwed up it can really hurt it . phd d: i ' m doing what eric e eric coached me through then that part of it , so i ' m pretty confident in that . , the only slight difference is that i use normalization values that , andreas calculated from the original plp , which is right . n so , i u i do , we actually do n't do that normalization for the plp , do we ? for the st just the straight plp features ? 
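The variant floated just above — reduce the 56 net outputs to 13 with a KLT, then treat them like cepstra by appending deltas and double-deltas — is straightforward to prototype. A sketch, assuming the KLT is estimated on the training portion of the features and using the standard regression formula for deltas; the random data is a stand-in for the pre-nonlinearity net outputs.

```python
import numpy as np

def klt(train_feats, n_keep=13):
    """Mean and top-`n_keep` principal components (KLT basis)."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return mu, vecs[:, np.argsort(vals)[::-1][:n_keep]]

def deltas(feats, win=2):
    """Standard regression-based delta features over +/- `win` frames."""
    pad = np.pad(feats, ((win, win), (0, 0)), mode="edge")
    num = sum(k * (pad[win + k:len(feats) + win + k]
                   - pad[win - k:len(feats) + win - k])
              for k in range(1, win + 1))
    return num / (2 * sum(k * k for k in range(1, win + 1)))

rng = np.random.default_rng(1)
net_out = rng.standard_normal((500, 56))   # stand-in pre-nonlinearity outputs
mu, basis = klt(net_out)
base = (net_out - mu) @ basis              # 13-dim "tandem cepstra"
tandem39 = np.hstack([base, deltas(base), deltas(deltas(base))])
print(tandem39.shape)                      # (500, 39)
```

Whether this beats feeding all 56 outputs straight in is exactly the empirical question the discussion leaves open.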
phd c: so , there 's there is room for bugs that we might not have discovered , professor f: . i would actually double check with stephane at this point , cuz he 's probably the one here , he and dan are the ones who are at this point most experienced with the tandem thing and there may be some little bit here and there that is not being handled right . phd d: . it 's hard with features , cuz you what they should look like . , you ca n't just , like , print the values out in ascii and , look at them , see if they 're phd g: , and also they 're not , as i understand it , you do n't have a way to optimize the features for the final word error . , these are just discriminative , but they 're not , optimized for the final phd g: right . so it there 's always this question of whether you might do better with those features if there was a way to train it for the word error metric that you 're actually professor f: so wha w what an and you may not be in this case , come to think of it , because , you 're just taking something that 's trained up elsewhere . so , what you do in the full procedure is you , , have an embedded training . so you the net is trained on , a , viterbi alignment of the training data that comes from your full system . and so that 's where the feedback comes all around , so that it is actually discriminant . you can prove that it 's a , if you believe in the viterbi assumption that , getting the best path , is almost equivalent to getting the best , total probability , then you actually do improve that by , by training up on local , local frames . but , we are n't actually doing that here , because we did that for a hybrid system , and now we 're plugging it into another system and so it is n't i it would n't quite apply here . phd d: so another huge experiment we could do would be to take the tandem features , do sri forced alignments using those features , and then re - do the net with those . professor f: . another thing is since you 're not using the net for recognition per se but just for this transformation , it 's probably bigger than it needs to be . so that would save a lot of time . phd c: and there 's a mismatch in the phone sets . so , you 're using a l a long a larger phone set than what professor f: the other thing , just to mention that stephane this was an innovation of stephane 's , which was a pretty neat one , and might particularly apply here , given all these things we 're mentioning . , stephane 's idea was that , discriminant , approaches are great . even the local ones , given , these potential outer loops which , you can convince yourself turn into the global ones . , however , there 's times when it is not good . , when something about the test set is different enough from the training set that , the discrimination that you 're learning is not a good one . so , his idea was to take as the input feature vector to the , gaussian mixture system , a concatenation of the neural net outputs and the regular features . phd c: no , but we when i first started corresponding with dan about how to go about this , that was one of the things that we definitely went there . professor f: . , i ' m that stephane was n't the first to think of it , but actually stephane did it phd c: and do you do a klt transform on the con on the combined feature vector ? professor f: , actually , i , you should check with him , because he tried several different combinations . 
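The two combination recipes in play here and just below — a single KLT over the concatenated net-plus-PLP vector, versus a KLT on the net outputs alone (the PLP part being already decorrelated) followed by concatenation — differ only in where the transform is estimated. A sketch of both, with all dimensionalities invented:

```python
import numpy as np

def klt(x, n_keep):
    """Mean and top-`n_keep` principal components of feature matrix `x`."""
    mu = x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(x - mu, rowvar=False))
    return mu, vecs[:, np.argsort(vals)[::-1][:n_keep]]

rng = np.random.default_rng(2)
net_out = rng.standard_normal((500, 56))   # stand-in network outputs
plp39 = rng.standard_normal((500, 39))     # stand-in PLP feature vectors

# variant A: one KLT over the whole concatenated vector
both = np.hstack([net_out, plp39])
mu_a, basis_a = klt(both, n_keep=39)
feats_a = (both - mu_a) @ basis_a

# variant B: KLT only the net outputs (PLP is already decorrelated),
# then concatenate -- at the cost of a longer feature vector
mu_b, basis_b = klt(net_out, n_keep=17)
feats_b = np.hstack([(net_out - mu_b) @ basis_b, plp39])

print(feats_a.shape, feats_b.shape)        # (500, 39) (500, 56)
```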
phd c: because you end up with this huge feature vector , so that might be a problem , a unless you do some form of dimensionality reduction . professor f: . i , th what i do n't remember is which came out best . so he did one where he put o put e the whole thing into one klt , and another one , since the plp things are already orthogonalized , he left them alone and just did a klt on the net outputs and then concatenated that . and i do n't remember which was better . phd d: did he did he try to ? so he always ended up with a feature vector that was twice as long as either one of the ? phd g: we need to close up cuz i need to save the data and , get a call . professor f: i g , given that we 're in a hurry for snacks , maybe we should do them together . phd g: should we just ? , are we trying to do them { nonvocalsound } in synchrony ? that might be fun . professor f: but we could just , see if we find a rhythm , what , o 's or zeroes , we wanna agree on that ? professor f: why do n't we do zer i anyone have a problem with saying zero ? is zero ok ?
the berkeley meeting recorder project is well underway , and this meeting discusses the progress and ongoing issues. a pressing concern for the group is the darpa meeting in july , which is only a short time away , and for which they would like to have some progress. specifically , the group would like to have transcripts available , which would mean resolving legal issues for data use and , on the basis of feedback from ibm , getting more transcription underway. additionally they would also like to have the question answering mock-up and transcriber interface ready for then. plp results for the front-end look good , with the group also reporting progress in segmentation: thilo's segmenter will now be used and ways of improving performance investigated; the classifier segmentation is progressing well , especially in the use of prosody for identifying interruptions. work on the front end continues , with improvements of 3-5% being made. the group discussed how the digits should be recorded in the meeting. in the end they decided to record these in unison for all of the meeting participants as a whole. to improve the performance of thilo's automatic segmenter , it is going to be retrained and adapted to run with thilo's posteriors and speaker background models. regarding transcription , no new transcribers will be employed until the situation regarding ibm is clarified. legal issues surrounding the approval and signing off of transcripts by participants have proved to be very complicated , and so will be sorted out off line by those involved by july. after finding discrepancies with the cmu researchers , the icsi group have decided to tune the size of their gaussian system. after raising the difficulty of checking for bugs in their generation of tandem features , they decide to check with stephane who has more experience of these procedures. for the darpa meeting in july , the group propose that they should have the question answering mock-up and transcriber interface ready , and also have data available. unfortunately , there are legal issues regarding the approval of transcripts. additionally , the group would like to have their data transcriptions in "production mode" by then. however the group do not want to hire more transcribers until ibm confirms in the next 2-3 weeks the acceptability of the data. segmentation for the recogniser has been done by hand , which the group consider "cheating" ; instead they now want to use thilo's automatic segmenter. the classifier segmentation work is going well , but needs more data to improve results since non-native speaker data cannot be used. for the front-end , so far the group have been using a high number of gaussians per cluster ( 64 ) rather than the ten per cluster used by researchers at cmu , therefore they need to tune their gaussian system to the feature vector. the group observed that it would be difficult to check for bugs in the generation of tandem features for the sri system. experimentation is taking place using different front-ends with the sri recogniser. this is not yet complete , but plp results are improving to match those of mfcc , with vocal tract length normalisation working "beautifully" on a training set of 24 hours , and giving an overall improvement of between 3 and 5%. thilo's automatic segmenter is now working , and although it has low precision , this is mitigated by the high recall. the group will send ibm another sample file to check that the beep problems are fixed , and this should take 2-3 weeks.
progress on transcriptions has been made on 5 "set one" meetings , and two more transcribers taken on. pre-segmentation has proved useful. meeting recorder data of the 62 hours of meetings already analysed has been organised into a spreadsheet with the aim to make this available over the www. classifier segmentation is expected to give better results from more data: currently "cheating" using word features for forced alignment , but looking to use other data such as "spurts". prosodic features are looking promising for identifying interruptions. generally the icsi data offers better pitch features and vowel voicing than the switchboard corpus due to the use of close-talking mikes rather than telephone handsets.
###dialogue: phd g: ok . your channel number 's already on this blank sheet . so you just if you can phd g: but if you think it 's . it 's a default . but set it higher if you like . professor f: it 's not showing much . test , test , test , test . ok , that seems better ? ok , good . , that 's good . that 's ahh . mmm . so i had a question for adam . have we started already ? professor f: . great idea . i was gon na ask adam to , say if he thought anymore about the demo because it occurred to me that this is late may and the darpa meeting is in mid july . , but i do n't remember w what we i know that we were gon na do something with the transcriber interface is one thing , but there was a second thing . anybody remember ? phd g: , we were gon na do a mock - up , like , question answering , that was separate from the interface . do you remember ? remember , like , asking questions and retrieving , but in a pre - stored fashion . professor f: alright . so anyway , you have to sort out that out and get somebody going on it cuz we 're got a month left . so . professor f: ok . so , what are we g else we got ? you got you just wrote a bunch of . phd g: no . that was all , previously here . i was writing the digits and then i realized i could xerox them , phd g: because i did n't want people to turn their heads from these microphones . so . we all , have the same digit form , for the record . professor f: that 's . so , the choice is , which do we want more , the comparison , of everybody saying them at the same time or the comparison of people saying the same digits at different times that ? phd g: , it actually it might be good to have them separately and have the same exact strings . , we could use them for normalizing , but it goes more quickly doing them in unison . phd e: but anyway , they wo n't be identical as somebody is saying zero in some sometimes , saying o , and so , it 's not i not identical . professor f: really boring chorus . do we have an agenda ? adam usually tries to put those together , but he 's ill . professor f: is there that 's happened about , , the sri recognizer et cetera , tho those things that were happening before with ? y y you guys were doing a bunch of experiments with different front - ends and then with is is that still where it was , the other day ? phd d: now the you saw the note that the plp now is getting the same as the mfcc . right ? phd c: . actually it looks like it 's getting better . so . but but it 's not phd c: but , that 's not d directly related to me . does n't mean we ca n't talk about it . , it seems it looks l i have n't the it 's the experiment is still not complete , but , it looks like the vocal tract length normalization is working beautifully , actually , w using the warp factors that we computed for the sri system and just applying them to the icsi front - end . phd c: just had to take the reciprocal of the number because they have different meanings in the two systems . phd c: but one issue actually that just came up in discussion with liz and don was , as far as meeting recognition is concerned , we would really like to , move , to , doing the recognition on automatic segmentations . because in all our previous experiments , we had the , we were essentially cheating by having the , , the h the hand - segmentations as the basis of the recognition . 
and so now with thilo 's segmenter working so , we should consider doing a phd e: that - that 's what i wanted to do anyway , so we should just get together and phd g: and even the good thing is that since you , have high recall , even if you have low precision cuz you 're over - generating , that 's good because we could train noise models in the recognizer for these kinds of , transients and things that come from the microphones , phd g: but i know that if we run recognition unconstrained on a whole waveform , we do very poorly because we 're getting insertions in places what that you may be cutting out . so we do need some pre - segmentation . phd c: we should we should consider doing some extra things , like , , retraining or adapting the models for background noise to the to this environment , . phd g: right now they 're discrete , yes or no for a speaker , to consider those particular speaker background models . there 's lots of ins interesting things that could be done . phd d: . so , talked with brian and gave him the alternatives to the single beep at the end of each utterance that we had generated before . phd d: the chuck chunks . right . and so he talked it over with the transcriber and the transcriber thought that the easiest thing for them would be if there was a beep and then the nu a number , a digit , and then a beep , at the beginning of each one and that would help keep them from getting lost . and , so adam wrote a little script to generate those style , beeps phd d: and so we 're i came up here and just recorded the numbers one through ten . phd d: . he then he d i recorded actually , i recorded one through ten three times at three different speeds and then he picked . phd d: he liked the fastest one , so he just cut those out and spliced them in between , two beeps . phd e: it will be funny when you 're really reading digits , and then there are the chunks with your digits in ? phd d: . ! maybe . and she said it was n't gon na the transcriber said it would n't be a problem cuz they can actually make a template , that has beep , number , beep . so for them it 'll be very quick to put those in there when they 're transcribing . so , we we 're gon na send them one more sample meeting , and thilo has run his segmentation . adam 's gon na generate the chunked file . and then , i 'll give it to brian and they can try that out . and when we get that back we 'll see if that fixes the problem we had with , too many beeps in the last transcription . professor f: ok . do w do what do you have any idea of the turn - around on those steps you just said ? phd d: , i . the last one seemed like it took a couple of weeks . , maybe even three . , that 's just the i b m side . our side is quick . , i . how long does your ? professor f: , i meant the overall thing . e u the reason i ' m asking is because , jane and i have just been talking , and she 's just been doing . , e a , further hiring of transcribers . professor f: and so we do n't really know exactly what they 'll be doing , how long they 'll be doing it , and , because right now she has no choice but to operate in the mode that we already have working . and , so it 'd be it 'd be good to get that resolved , soon as we could , phd d: i , i hope @ we can get a better estimate from this one that we send them . 
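( a minimal sketch of the "beep , number , beep" chunking described above , not the actual script ; beep , digit_waves and chunks are assumed to be hypothetical mono waveforms sharing one sample rate : )

import numpy as np

def splice_chunks(chunks, beep, digit_waves):
    # prefix each utterance chunk with beep - spoken digit - beep so the
    # transcriber can keep track of where they are in the file
    out = []
    for i, chunk in enumerate(chunks):
        out += [beep, digit_waves[i % len(digit_waves)], beep, chunk]
    return np.concatenate(out)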
professor f: in particular i would really hope that when we do this darpa meeting in july that we have we 're into production mode , somehow , that we actually have a stream going and we know how it does and how it operates . that would certainly be a very good thing to know . professor f: maybe before we do the meeting info organize thing , maybe you could say relevant about where we are in transcriptions . postdoc a: ok . so , we , the transcribers have continued to work past what i ' m calling " set one " , which was the s the set that i ' ve been , ok , talking about up to this point , but , they ' ve gotten five meetings done in that set . right now they 're in the process of being edited . , the , let 's see , i hired two transcribers today . i ' m thinking of hiring another one , which will because we ' ve had a lot of attrition . and that will bring our total to postdoc a: so , one of them had a baby . , one of them really w was n't planning postdoc a: , one of them , had never planned to work past january . , it 's th all these various things , cuz we , we presented it as possibly a month project back in january and , so it makes sense . , through attrition we ' ve we 're down to two , but they 're really solid . we 're really lucky the two that we kept . and , i do n't mean anything against the others . what is we ' ve got a good core . no . we had a good core phd g: , they wo n't hear this since they 're going . they wo n't be transcribing this meeting . postdoc a: , but still . , i d it 's just a matter of we w we 're we ' ve got , postdoc a: two of the ones who , ha had been putting in a lot of hours up to this point and they 're continuing to put in a lot of hours , which is wonderful , and excellent work . and so , then , in addition , i hired two more today and i ' m planning to h hire a third one with this within this coming week , but the plan is just as , morgan was saying we discussed this , and the plan right now is to keep the staff on the leaner side , rather than hiring , like , eight to ten right now , because if the ibm thing comes through really quickly , then , we would n't wanna have to , , lay people off and . so . and this way it 'll , i got really a lot of response for my notice and i could hire additional people if i wish to . professor f: . an - and the other thing is , in the unlikely event and since we 're so far from this , it 's a little hard to plan this way in the unlikely event that we actually find that we have , transcribers on staff who are twiddling their thumbs because , there 's , all the that was sitting there has been transcribed and they 're faster the pipeline is faster than , than the generation , i in the day e event that day actually dawns , i bet we could find some other for them to do . professor f: so that , a as we were talking , if we hire twelve , then we could , run into a problem later . , we also just could n't sustain that forever . but but , for all sorts of reasons but if we hire f , f we have five on staff five or six on staff at any given time , then it 's a small enough number so we can be flexible either way . phd g: it 'd be great , too , if , we can we might need some help again getting the tighter boundaries or some hand to experiment with , to have a ground truth for this segmentation work , which i you have some already that was really helpful , and we could probably use more . phd e: mmm , . 
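( a minimal sketch , assuming hypothetical frame-level speech/nonspeech labels , of how the automatic segmentation could be scored against the hand ground truth being requested above ; it is not the project's actual scoring tool : )

import numpy as np

def precision_recall(auto, ref):
    # auto, ref: boolean arrays, one entry per frame, True = speech
    auto, ref = np.asarray(auto, bool), np.asarray(ref, bool)
    tp = float(np.sum(auto & ref))
    precision = tp / max(np.sum(auto), 1)  # over-generation lowers precision
    recall = tp / max(np.sum(ref), 1)      # missed speech lowers recall
    return precision, recall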
that was a thing i planned working on , is , to use the transcriptions which are done by now , and to use them as , phd e: and to use them for training a or for fo whatever . to to create some speech - nonspeech labels out of them , and , but that 's a thing w was w what i ' m just looking into . postdoc a: the the pre - segmentations are so much are s so extremely helpful . now there was , i g so , a couple weeks ago i needed some new ones and it happened to be during the time that he was on vacation f for just very few days you were away . but it happened to be during that time i needed one , so i started them on the non - pre - segmented and then switched them over to yours and , they always appreciate that when they have that available . and he 's , usually , , . so they really appreciate it . but i was gon na say that they do adjust it once in a while . , once in a while there 's something like , postdoc a: , and e actually you talked to them . did n't you ? did you ? have you ? postdoc a: and and she was and so , i asked her , they 're very perceptive . i really want to have this meeting of the transcribers . i have n't done it yet , but i wanna do that and she 's out of town , for a couple of weeks , but i wanna do that when she returns . , cuz she was saying , in a span of very short period we asked it seems like the ones that need to be adjusted are these things , and she was saying the short utterances , the , postdoc a: , you 're you 're aware of this . but but actually i it 's so correct for so much of the time , that it 's an enormous time saver and it just gets tweaked a little around the boundaries . phd g: is there actually a record of where they change ? , you can compare , do a diff on the just so that we knew postdoc a: you could do it . it 's it 's complicated in that , hhh , i phd e: actually , when they create new , new segments , it will be , not that easy but . one could do that . phd g: , if we keep a old copy of the old time marks just so that if we run it we know whether we 're which ones were cheating postdoc a: there is a there is one problem with that and that is when they start part way through then what i do is i merge what they ' ve done with the pre - segmented version . postdoc a: so it 's not a pure condition . wha - what you 'd really like is that they started with pre - segmented and were pre - segmented all the way through . postdoc a: and , @ i , the it was n't possible for about four of the recent ones . but , it will be possible in the future because we 're , . phd g: mmm , that 's great . as long as we have a record , i , of the original automatic one , we can always find out how we would do fr from the recognition side by using those boundaries . , a completely non - cheating version . also if you need someone to record this meeting , i ' m happy to for the transcribers i could do it , or chuck or adam . phd d: . so , , jane and adam and i had a meeting where we talked about the reorganization of the directory structure for all of the meeting phd d: no . for all the meeting recorder data . we should have . and so we ' ve got a plan for what we 're gon na do there . and then , jane also s prepared a , started getting all of the meetings organized , so she prepared a spreadsheet , which i spent the last couple of days adding to . 
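( a minimal sketch of the diff idea raised above : keep the original automatic time marks and report which segments the transcribers changed ; the segment lists and the 50 ms tolerance are hypothetical : )

def changed_segments(auto_segs, edited_segs, tol=0.05):
    # segments are (start, end) pairs in seconds; a boundary "matches" if it
    # falls in the same tol-sized bucket as some automatic boundary
    # (bucketing is approximate at bucket edges, which is fine for a tally)
    auto = {round(t / tol) for seg in auto_segs for t in seg}
    return [seg for seg in edited_segs
            if any(round(t / tol) not in auto for t in seg)]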
so i went through all of the data that we have collected so far , and have been putting it into , a spreadsheet with start time , the date , the old meeting name , the new meeting name , the number of speakers , the duration of the meeting , comments , what its transcription status is , all that . and so , the idea is that we can take this and then export it as html and put it on the meeting recorder web page so we can keep people updated about what 's going on . phd d: , i ' ve got ta get some more information from jane cuz i have some gaps here that i need to get her to fill in , but so far , as of monday , the fourteenth , we ' ve had a total number of meeting sixty - two hours of meetings that we have collected . and , some other interesting things , average number of speakers per meeting is six . , and i ' m gon na have on here the total amount that 's been transcribed so far , but i ' ve got a bunch of , that 's what i have to talk to jane about , figuring out exactly which ones have been completed and . but , this 'll be a thing that we can put up on the web site and people can be informed of the status of various different ones . and it 'll also list , like under the status , if it 's at ibm or if it 's at icsi , or if it 's completed or which ones we 're excluding and there 's a place for comments , so we can , say why we 're excluding things and . professor f: now would the ones that , are already transcribed we h we have enough there that c , we ' ve already done some studies and , should n't we go through and do the business - es u of having the , , participants approve it , for approve the transcriptions for distribution and ? postdoc a: , interesting idea . in principle , i would say yes , although i still am doing some the final - pass editing , trying to convert it over to the master file as the being the channelized version and it 's , it seems like i get into that a certain way and then something else intervenes and i have to stop . cleaning up the things like the , places where the transcriber was uncertain , and doing spot - checking here and there . so , , i it would make sense to until th that 's done , but professor f: , le let me put in another a milestone as i did with the , the pipeline . , we are gon na have this darpa meeting in the middle of july , professor f: and it w it 'd be given that we ' ve been we ' ve given a couple public talks about it already , spaced by months and months , it 'd be pretty bad if we continued to say none of this is available . professor f: right . so we can s we wanna be able to say " here is a subset that is available right now " phd c: and they do n't have to approve , th an edited version , they can just give their approval to whatever version professor f: , in principle , yes . but , i if somebody actually did get into some legal issue with it then we phd c: bu . but th , the editing will continue . presumably if s errors are found , they will be fixed , but they wo n't change the content of the meetings . phd g: , i if jane is clarifying question , then , how can they agree to it before they know her final version ? postdoc a: the other thing , too , is there can be subtleties where a person uses this word instead of that word , which @ could ' ve been transcribed in the other way . postdoc a: and no and they would n't have been slanderous if it had been this other word . ? 
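( a minimal sketch of the export step described above , assuming a hypothetical "meetings.csv" holding the columns listed : date , old and new meeting names , number of speakers , duration , transcription status , comments : )

import csv, html

def csv_to_html(path="meetings.csv"):
    # render each spreadsheet row as an html table row for the web page
    with open(path, newline="") as f:
        rows = ["<tr>" + "".join("<td>%s</td>" % html.escape(c) for c in r) + "</tr>"
                for r in csv.reader(f)]
    return "<table>\n" + "\n".join(rows) + "\n</table>"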
professor f: i it , there is a point at which i agree it becomes ridiculous because , you could do this final thing and then a year from now somebody could say , that should be a period and not a question mark . right ? and you do n't you there 's no way that we 're gon na go back and ask everybody " do you approve this , this document now ? " so what it is that the thing that they sign i have n't looked at it in a while , but it has to be open enough that it says " ok , from now on , now that i ' ve read this , you can use do anything you want with these data . " and , but , i we wanna so , assuming that it 's in that wording , which i do n't remember , i we just wanna have enough confidence ourselves that it 's so close to the final form it 's gon na be in , a year from now that they 're postdoc a: i agree . mmm . i agree . it 's just , a question of , if the person is using the transcript as the way of them judging what they said and whether it was slanderous , then it seems like it 's i it needs to be more correct than if we could count on them re - listening to the meeting . because it becomes , in a way a f , a legal document i if they ' ve to that . professor f: , i forget how we end right . i forget how we ended up on this , but i remember my taking the position of not making it so easy for everybody to observe everything and adam was taking the position of having it be really straightforward for people to check every aspect of it including the audio . and i do n't remember who won , adam or me , but postdoc a: , if it 's only the transcript , though , th this is my point , that professor f: , the , that 's why i ' m bringing this up again , because i ca n't remember how we ended up . professor f: that it was the transcrip he wanted to do a web interface that would make it professor f: that would give you access to the transcript and the audio . that 's what adam wanted . and i do n't remember how we ended up . phd g: , with the web interface it 's interesting , because you could allow the person who signs to be informed when their transcript changes , like that . and , i would say " no " . like , i do n't wanna know , but some people might be really interested and then y in other words , they would be informed if there was some significant change other than typos and things like that . phd g: , i what happened to the small heads thing , but i j , i ' m just saying that , like , you can say that any things that are deemed phd g: anyway . , i agree that at some point people probably wo n't care about typos but they would care about significant meaning changes and then they could be asked for their consent , i , if those change . cuz assumi assuming we do n't really distribute things that have any significant changes from what they sign anyway . phd c: we just have to give them a chance to listen to it , and if they do n't , that 's their problem . postdoc a: unfortunately , in the sign thing that they signed , it says " transcripts " . phd g: i that 's a lot to ask for people that have been in a lot of meetings . professor f: w anyway , have n't we ' ve gone down this path a number of times . i know this can lead to extended conversations and not really get anywhere , so let me just suggest that , off - line that , the people involved figure it out and take care of it before it 's july . ok . so so that in july we can tell people " yes , we have this and you can use it " . professor f: so , let 's see . what else we got ? . 
don did a report about his project in class and , an oral and written version . so that was he was doing with you . phd g: , it 's i one thing we 're learning is that the amount we have eight meetings there because we could n't use the non - native all non - native meetings and it 's , probably below threshold on enough data for us for the things we 're looking at because the prosodic features are very noisy and so you need a lot of data in order to model them . , so we 're starting to see some patterns and we 're hoping that maybe with , i , double or triple the data with twenty meetings or so , that we would start to get better results . but we did find that some of the features that , i gue jane would know about , that are expressing the distance of , boundaries from peaks in the utterance and some local , range pitch range effects , like how close people are to their floor , are showing up in these classifiers , which are also being given some word features that are cheating , cuz they 're true words . , so these are based on forced alignment . word features like , word frequency and whether or not something 's a backchannel and . so , we 're starting to see , some interesting patterns . professor f: so the dominant features , including everything , were those quasi - cheating things . where these are grad b: . sometimes positions in sentences , or in spurts , was helpful . i if that 's cheating , too . phd g: but roughly speaking , the recognized words are gon na give you a similar type of position . phd g: y it should be . , we and actually that 's one of the things we 're interested in doing , is a grad b: ti just p time position , like when the word starts ? i if that was in the phd g: and it depends on speaking rate . . that 's actually why i did n't use it at first . but we one of the interesting things was i you reported on some te punctuation type finding sentence boundaries , finding disfluency boundaries , and then i had done some work on finding from the foreground speech whether or not someone was likely to interrupt , so where , if i ' m talking now and someone and andreas is about to interrupt me , is he gon na choose a certain place in my speech , either prosodically or word - based . and there the prosodic features actually showed up and a neat thing even though the word features were available . and a neat thing there too is i tried some putting the speaker so , i gave everybody a short version of their name . so the real names are in there , which we could n't use . , we should use i ds . and those do n't show up . so that means that overall , it was n't just modeling morgan , or it was n't just modeling a single person , but was trying to , get a general idea the model the tree classifier was trying to find general locations that were applicable to different speakers , even though there are huge speaker effects . the but the main limitation now is i because we 're only looking at things that happen every ten words or every twenty words , we need more data and more data per speaker . it 'd also be interesting to look at the edu meetings because we did include meeting type as a feature , so whether you were in a r meeting recorder meeting or a robustness meeting did matter to interrupts because there are just fewer interrupts in the robustness meetings . phd g: and so the classifier learns more about morgan than it does about the average person , which is not bad . it 'd probably do better than , but it was n't generalizing . 
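( an illustrative sketch only , since the actual work used its own tree classifiers : train an interrupt predictor on prosodic and word features while withholding the speaker-identity column , so the model has to generalize across speakers rather than just modeling one talkative person ; X and y are hypothetical : )

from sklearn.tree import DecisionTreeClassifier

def train_interrupt_model(X, y, speaker_col=None):
    # X: feature rows (prosodic + word features), y: 1 = an interruption
    # starts here; dropping the speaker-identity column checks that the
    # tree is not simply memorizing a single speaker
    if speaker_col is not None:
        X = [row[:speaker_col] + row[speaker_col + 1:] for row in X]
    return DecisionTreeClassifier(min_samples_leaf=50).fit(X, y)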
so it 's and don , we have a long list of things he 's starting to look at now over the summer , where we can and he 'll be able to report on more things in the future . but it was great that we could at least go from the , jane 's transcripts and the , recognizer output and get it to this point . and it 's something mari can probably use in her preliminary report like , " , we 're at the point where we 're training these classifiers and we 're just reporting very preliminary but suggestive results that some features , both word and pro prosodic , work . " the other thing that was interesting to me is that the pitch features are better than in switchboard . and that really is from the close - talking mikes , cuz the pitch processing that was done has much cleaner behavior than the switchboard telephone bandwidth . phd g: . , first of all , the pitch tracks are m have less , halvings and doublings than switchboard and there 's a lot less dropout , so if you ask how many regions where you would normally expect some vowels to be occurring are completely devoid of pitch information , in other words the pitch tracker just did n't get a high enough probability of voicing for words for , five word there are much fewer than in switchboard . so the missing we had a big missing data problem in switchboard and , so the features were n't as reliable cuz they were often just not available . phd d: could it have to do with the lower frequency cut - off on the switchboard ? phd g: so that 's actually good . ma - maybe . , the tele we had telephone bandwidth for switchboard and we had the an annoying telephone handset movement problem that may also affect it . so we 're just getting better signals in this data . which is . anyway , don 's been doing a great job and we hope to continue with , andreas 's help and also some of thilo 's help on this , phd g: to try to get a non - cheating version of how all this would work . professor f: has has , ? we just , just talked about this the other day , but h has anybody had a chance to try changing , insertion penalty things with the , using the tandem system input for the ? phd c: there were a little the relative number of there were a higher number of deletions , actually . so , you , so , actually it preferred to have a positive er , negative insertion penalty , phd c: which means that , but , it did n't change th the by adjusting that the , . the error changed by probably one percent or so . but , given that word error rate is so high , that 's not a phd c: that 's not the problem . no . but , we s just , , chuck and i talked and the @ next thing to do is probably to tune the , the size of the gaussian system , @ to this feature vector , which we have n't done . we just used the same configuration as we used for the standard system . and , , dan @ dan just sent me a message saying that cmu used , something like ten gaussians per cluster , each mixture has ten gaussians phd c: so that 's a big difference and it might be way off and give very poorly trained , , gaussians that way , an and poorly trained mixture weights . so so , we have the turn - around time on the training when we train only the a male system with , , our small training set , is less than twenty - four hours , so we can run lots of , just brute force , try a whole bunch of different , settings . and , with the new machines it 'll be even better . phd c: but the plp features work , , continue to improve the , as i said before , the using dan 's , vocal tract normalization option works very . 
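( a minimal sketch of the missing-pitch diagnostic described above : the fraction of frames inside expected-voiced regions where the tracker returned no estimate ; f0 and voiced_mask are hypothetical arrays : )

import numpy as np

def pitch_dropout_rate(f0, voiced_mask):
    # f0: per-frame pitch estimates, 0 where the tracker found no voicing;
    # voiced_mask: True where a vowel is expected (e.g., from an alignment)
    f0 = np.asarray(f0, float)
    voiced = np.asarray(voiced_mask, bool)
    return float(np.mean(f0[voiced] == 0.0))  # higher = more missing pitch data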
so , @ i ran one experiment where we 're just did the vocal tract le normalization only in the test data , so i did n't bother to retrain the models , and it improved by one percent , which is about what we get with , just @ actually doing both training and test normalization , with , the , with the standard system . so , in a few hours we 'll have the numbers for the for retraining everything with vocal tract length normalization and so , that might even improve it further . so , it looks like the p l - fea p features do very now with after having figured out all these little tricks to get it to work . phd g: . so you mean you improve one percent over a system that does n't have any v t l in it already ? professor f: ok . so then we 'll have our baseline to compare the currently hideous , new thing with . phd c: right . a right . and and what that suggests also is that the current switchboard mlp is n't trained on very good features . phd c: , because it was trained on whatever , was used , last time you did hub - five , which did n't have any of the professor f: right . but all of these effects were j like a couple percent . , y the phd c: , but if you add them all up you have , almost five percent difference now . professor f: add all of them . one was one point five percent and one was point eight . phd c: , actually , and it 's , what 's actually qu interesting is that with , you m prob maybe another half percent if you do the vtl in training , and then interestingly , if you optimize you get more of a win out of rescoring the , , the n best lists , and optimizing the weights , than professor f: . but the part that 's actually adjustment of the front - end per se as opposed to doing putting vtln in is it was a couple percent . right ? it was it was there was one thing that was one and a half percent and one that was point eight . so and let me see if i remember what they were . one of them was , the change to , because it did it all at once , to , from bark scale to mel scale , which i really feel like saying in quotes , because @ they 're essentially the same scale but the but any i individual particular implementation of those things puts things in a particular place . professor f: so that 's why i wanted to look i still have n't looked at it yet . i wanna look at exactly where the filters were in the two , and it 's probably something like there 's one fewer or one more filter in the sub one kilohertz band and for whatever reason with this particular experiment it was better one way or the other . , it could be there 's something more fundamental but it , i it yet . and the other and the other that was like one and a half , and then there was point eight percent , which was what was the other thing ? professor f: we d were n't able to separate them out cuz it was just done in one thing . but then there was a point eight percent which was something else . professor f: do you remember the ? , . so that was , that one claim credit for , i in terms of screwing it up in the first place . so that someone e until someone else fixed it , which is that , i never put when i u we had some problems before with offsets . this inf this went back to , wall street journal . so we had , ea everybody else who was doing wall street journal knew that there were big dc offsets in th in these data in those data and nobody happened to mention it to us , and we were getting these , like , really terrible results , like two , three times the error everybody else was getting . 
and then in casual conversation someone ment mentioned " , i , you 're taking care of the offsets . " i said " what offsets ? " and at that point , we were pretty new to the data and we 'd never really , like , looked at it on a screen and then when we just put it on the screen and wroop ! there 's this big dc offset . so , in plp professor f: no . it 's just , it 's not uncommon for recorded electronics to have different , dc offsets . professor f: it 's it 's , no big deal . it 's , you could have ten , twenty , maybe thirty millivolts , whatever , and it 's consistently in there . , most people 's front - ends have pre - emphasis with it , with zero at zero frequency , so that it 's irrelevant . , but with p l p , we did n't actually have that . we had we had the equivalent of pre - emphasis in a , fletcher - munson style weighting that occurs in the middle of p l but it does n't actually have a zero at zero frequency , like , , typical simple fr pre - emphasis does . we had something more fancy . it was later on it did n't have that . so at that point i reali " sh we better have a high - pass filter " just , just take care of the problem . so i put in a high - pass filter at , ninety hertz or so , for a sixteen kilohertz sampling rate . and i never put anything in to adjust it for different sampling rates . and so , the code does n't know anything about that and so this is all at eight kilohertz and so it was at forty - five hertz instead of at ninety . so , i if dan fixed it or , what he professor f: he made it a parameter . so . , i if he did it right , he did fix it and then it 's taking care of sampling rate , which is great . phd c: u and but hpf , when you put a number after it , uses that as the hertz value of the cut - off . professor f: , frankly , we never did that with the rasta filter either , so the rasta filter is actually doing a different thing in the modulation spectral domain depending on what sampling rate you 're doing , which is another old bug of mine . but , . so that was the problem there was th we had always intended to cut off below a hundred hertz and it just was n't doing it , so now it is . so , that hep that helped us by , like , eight tenths of a percent . it still was n't a big deal . phd c: ok . , but , , again , after completing the current experiments , we 'll we can add up all the differences professor f: but but , i my point was that , the hybrid system thing that we did was , primitive in many ways . professor f: and i agree with you that if we fixed lots of different things and they would all add up , we would probably have a competitive system . but not that much of it is due to the front - end per se . maybe a couple percent of it is , as far as see from this . , unless you call , if you call vtl the front - en front - end , that 's , a little more . but that 's more both , . phd d: one experiment we should we 'll probably need to do though when , at some point , is , since we 're using that same the net that was trained on plp without all these things in it , for the tandem system , we may wanna go back and retrain , phd d: , for the tandem . , so we can see if it what effect it has on the tandem processing . phd c: so so , do we expect ? at this point i ' m as , e i ' m wondering is it can we expect , a tandem system to do better than a properly trained , a gaussian system trained directly on the features with , the right ch choice of parameters ? professor f: , that 's what we 're seeing in other areas . yes . 
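( a minimal sketch of the bug just described , not the actual plp code : a high-pass designed once with a cutoff normalized for 16 khz keeps that normalized value , so at an 8 khz sampling rate it sits at 45 hz instead of 90 ; passing the sampling rate per call , as in the fixed version , keeps the cutoff where it was intended : )

from scipy.signal import butter

def highpass(cutoff_hz=90.0, fs=16000.0, order=2):
    # normalize the cutoff against the nyquist of the *actual* sampling rate
    return butter(order, cutoff_hz / (fs / 2.0), btype="highpass")

b, a = highpass(fs=8000.0)  # still cuts off at 90 hz, rather than drifting to 45 hz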
so , it 's so , phd d: so , we but but we may not . , if it does n't perform as , we may not know why . cuz we need to do the exact experiment . professor f: , the reason to think it should is because you 're putting in the same information and you 're transforming it to be more discriminative . so . now , in some databases i would n't expect it to necessarily give you much and part of what i view as the real power of it is that it gives you a transformational capability for taking all sorts of different wild things that we do , not just th the standard front - end , but other things , like with multiple streams and , and allows you to feed them to the other system with this through this funnel . , so i think that 's the real power of it . i would n't expect huge in huge improvements . , but it should at least be roughly the same and maybe a little better . if it 's , like way worse then , phd d: so , morgan , an another thing that andreas and i were talking about was , so @ in the first experiment that he did we just took the whole fifty - six , outputs and that 's , compared to a thirty - nine input feature vector from either mfcc or plp . but one thing we could do is professor f: let let me just ask you something . when you say take the fifty - six outputs , these are the pre final nonlinearity outputs phd d: that 's what we did . so one thing we were wondering is , if we did principal components and , say , took out just thirteen , and then did deltas and double - deltas on that phd d: so we treated the th first thirteen as though they were standard features . , did dan do experiments like that to ? professor f: . talk with stephane . he did some things like that . it was either him or carmen . i forget . professor f: these were all different databases and different , in htk and all that , so i it may not apply . but my recollection of it was that it did n't make it better but it did n't make it worse . but , again , given all these differences , maybe it 's more important in your case that you not take a lot of these low - variance , components . phd d: cuz in a sense , the net 's already got quite a bit of context in those features , so if we did deltas and double - deltas on top of those , we 're getting even more . phd c: but there the main point is that , , it took us a while but we have the procedure for coupling the two systems debugged now and , there 's still conceivably some bug somewhere in the way we 're feeding the tandem features , either generating them or feeding them to this to the sri system , but it 's phd c: and i ' m wondering how we can debug that . how i ' m actually f quite that the feeding the features into the system and training it up , phd c: that 's this that 's essentially the same as we use with the ce with the p l p fe features . and that 's working great . so . i . phd c: there we could the another degree of freedom is how do you generate the k l t transform ? we to professor f: , and another one is the normalization of the inputs to the net . these nets are trained with particular normalization and when that gets screwed up it can really hurt it . phd d: i ' m doing what eric e eric coached me through then that part of it , so i ' m pretty confident in that . , the only slight difference is that i use normalization values that , andreas calculated from the original plp , which is right . n so , i u i do , we actually do n't do that normalization for the plp , do we ? for the st just the straight plp features ? 
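( a minimal sketch of the experiment being proposed above : reduce the 56 net outputs to 13 principal components , then append deltas and double-deltas to obtain a conventional 39-dimensional vector ; frames is a hypothetical ( frames x 56 ) array : )

import numpy as np

def pca_plus_deltas(frames, keep=13):
    x = frames - frames.mean(axis=0)            # center before the klt/pca
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    base = x @ vt[:keep].T                      # (T, 13) "static" features
    delta = np.gradient(base, axis=0)           # crude delta estimate
    ddelta = np.gradient(delta, axis=0)         # double-delta
    return np.hstack([base, delta, ddelta])     # (T, 39)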
phd c: so , there 's there is room for bugs that we might not have discovered , professor f: . i would actually double check with stephane at this point , cuz he 's probably the one here , he and dan are the ones who are at this point most experienced with the tandem thing and there may be some little bit here and there that is not being handled right . phd d: . it 's hard with features , cuz you what they should look like . , you ca n't just , like , print the values out in ascii and , look at them , see if they 're phd g: , and also they 're not , as i understand it , you do n't have a way to optimize the features for the final word error . , these are just discriminative , but they 're not , optimized for the final phd g: right . so it there 's always this question of whether you might do better with those features if there was a way to train it for the word error metric that you 're actually professor f: so wha w what an and you may not be in this case , come to think of it , because , you 're just taking something that 's trained up elsewhere . so , what you do in the full procedure is you , , have an embedded training . so you the net is trained on , a , viterbi alignment of the training data that comes from your full system . and so that 's where the feedback comes all around , so that it is actually discriminant . you can prove that it 's a , if you believe in the viterbi assumption that , getting the best path , is almost equivalent to getting the best , total probability , then you actually do improve that by , by training up on local , local frames . but , we are n't actually doing that here , because we did that for a hybrid system , and now we 're plugging it into another system and so it is n't i it would n't quite apply here . phd d: so another huge experiment we could do would be to take the tandem features , do sri forced alignments using those features , and then re - do the net with those . professor f: . another thing is since you 're not using the net for recognition per se but just for this transformation , it 's probably bigger than it needs to be . so that would save a lot of time . phd c: and there 's a mismatch in the phone sets . so , you 're using a l a long a larger phone set than what professor f: the other thing , just to mention that stephane this was an innovation of stephane 's , which was a pretty neat one , and might particularly apply here , given all these things we 're mentioning . , stephane 's idea was that , discriminant , approaches are great . even the local ones , given , these potential outer loops which , you can convince yourself turn into the global ones . , however , there 's times when it is not good . , when something about the test set is different enough from the training set that , the discrimination that you 're learning is not a good one . so , his idea was to take as the input feature vector to the , gaussian mixture system , a concatenation of the neural net outputs and the regular features . phd c: no , but we when i first started corresponding with dan about how to go about this , that was one of the things that we definitely went there . professor f: . , i ' m that stephane was n't the first to think of it , but actually stephane did it phd c: and do you do a klt transform on the con on the combined feature vector ? professor f: , actually , i , you should check with him , because he tried several different combinations . 
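( a minimal sketch of the concatenation idea attributed to stephane above ; since the discussion does not settle whether the klt went over the combined vector or only the net outputs , this version rotates only the net outputs , under hypothetical names : )

import numpy as np

def concat_features(plp, net_out, rotate_net=True):
    # plp: (T, 39) conventional features, net_out: (T, 56) mlp outputs
    if rotate_net:
        x = net_out - net_out.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        net_out = x @ vt.T                      # klt on the net outputs only
    return np.hstack([plp, net_out])            # roughly doubled feature vector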
phd c: because you end up with this huge feature vector , so that might be a problem , a unless you do some form of dimensionality reduction . professor f: . i , th what i do n't remember is which came out best . so he did one where he put o put e the whole thing into one klt , and another one , since the plp things are already orthogonalized , he left them alone and just did a klt on the net outputs and then concatenated that . and i do n't remember which was better . phd d: did he did he try to ? so he always ended up with a feature vector that was twice as long as either one of the ? phd g: we need to close up cuz i need to save the data and , get a call . professor f: i g , given that we 're in a hurry for snacks , maybe we should do them together . phd g: should we just ? , are we trying to do them { nonvocalsound } in synchrony ? that might be fun . professor f: but we could just , see if we find a rhythm , what , o 's or zeroes , we wanna agree on that ? professor f: why do n't we do zer i anyone have a problem with saying zero ? is zero ok ? ###summary: the berkeley meeting recorder project is well underway , and this meeting discusses the progress and ongoing issues. a pressing concern for the group is the darpa meeting in july , which is only a short time away , and for which they would like to have some progress to show. specifically , the group would like to have transcripts available , which would mean resolving legal issues for data use and , on the basis of feedback from ibm , getting more transcription underway. additionally they would also like to have the question answering mock-up and transcriber interface ready for then. plp results for the front-end look good , with the group also reporting progress in segmentation: thilo's segmenter will now be used and ways of improving performance investigated; the classifier segmentation is progressing well , especially in the use of prosody for identifying interruptions. work on the front end continues , with improvements of 3-5% being made. the group discussed how the digits should be recorded in the meeting. in the end they decided to record these in unison for all of the meeting participants as a whole. to improve the performance of thilo's automatic segmenter , it is going to be retrained and adapted , for example with background noise models for this environment and speaker background models. regarding transcription , no new transcribers will be employed until the situation regarding ibm is clarified. legal issues surrounding the approval and signing off of transcripts by participants have proved to be very complicated , and so will be sorted out offline by those involved before july. after noting differences from the cmu researchers' configuration , the icsi group have decided to tune the size of their gaussian system. after raising the difficulty of checking for bugs in their generation of tandem features , they decided to check with stephane , who has more experience of these procedures. for the darpa meeting in july , the group propose that they should have the question answering mock-up and transcriber interface ready , and also have data available. unfortunately , there are legal issues regarding the approval of transcripts. additionally , the group would like to have their data transcriptions in "production mode" by then. however , the group do not want to hire more transcribers until ibm confirms the acceptability of the data in the next 2-3 weeks.
segmentation for the recogniser has been done by hand , which the group consider "cheating" ; instead they now want to use thilo's automatic segmenter. the classifier segmentation work is going well , but needs more data to improve results since non-native speaker data cannot be used. for the front-end , so far the group have been using a high number of gaussians per cluster ( 64 ) rather than the ten per cluster used by researchers at cmu ; they therefore need to tune their gaussian system to the feature vector. the group observed that it would be difficult to check for bugs in the generation of tandem features for the sri system. experimentation is taking place using different front-ends with the sri recogniser. this is not yet complete , but plp results are improving to match those of mfcc , with vocal tract length normalisation working "beautifully" ( training turn-around on the small training set is under 24 hours ) and the accumulated fixes giving an overall improvement of between 3 and 5%. thilo's automatic segmenter is now working , and although it has low precision , this is mitigated by the high recall. the group will send ibm another sample file to check that the beep problems are fixed , and this should take 2-3 weeks. progress on transcriptions has been made on 5 "set one" meetings , and two more transcribers taken on. pre-segmentation has proved useful. meeting recorder data of the 62 hours of meetings already analysed has been organised into a spreadsheet with the aim of making this available over the www. classifier segmentation is expected to give better results from more data: currently "cheating" by using word features from forced alignments , but looking to use other data such as "spurts". prosodic features are looking promising for identifying interruptions. generally the icsi data offers better pitch features and vowel voicing than the switchboard corpus due to the use of close-talking mikes rather than telephone handsets.
17
grad h: , so if so if anyone has n't signed the consent form , do so . professor d: just just to be consistent , from here on in at least , that we 'll do it at the end . professor d: and right ? that was that was the point . so , i had asked actually anybody who had any ideas for an agenda to send it to me and no one did . so , professor d: right , s ok , so one item for an agenda is jane has some research to talk about , research issues . and , adam has some short research issues . professor d: , i have a list of things that were done over the last three months i was supposed to send off , and , i sent a note about it to adam and jane but i 'll just run through it also and see if someone thinks it 's inaccurate or insufficient . professor d: , to , ibm . they 're , so . , so , so , i 'll go through that . , and , anything else ? anyone wants to talk about ? professor d: cuz that 's cuz that was all about the , i chat with you about that off - line . that 's another thing . , and , anything else ? nothing else ? , there 's a , there is a , telephone call tomorrow , which will be a conference call that some of us are involved in for a possible proposal . , we 'll talk about it next week if something grad h: do you want me to be there for that ? i noticed you c ' ed me , but i was n't actually a recipient . i did n't quite to make of that . professor d: , we 'll talk about that after our meeting . , ok . so it sounds like the three main things that we have to talk about are , this list , jane and adam have some research items , and , other than that , anything , as usual , anything goes beyond that . ok , jane , since you were cut off last time why do n't we start with yours , make we get to it . postdoc f: but , same idea . so , if you ' ve looked at this you ' ve seen it before , so , as , part of the encoding includes a mark that indicates an overlap . it 's not indicated with , tight precision , it 's just indicated that ok , so , it 's indicated to so the people parts of sp which stretches of speech were in the clear , versus being overlapped by others . so , i used this mark and , divided the i wrote a script which divides things into individual minutes , of which we ended up with forty five , and a little bit . and , minute zero , is the first minute up to sixty seconds . postdoc f: and , what you can see is the number of overlaps and then to the right , whether they involve two speakers , three speakers , or more than three speakers . and , and , what i was looking for sp specifically was the question of whether they 're distributed evenly throughout or whether they 're bursts of them . and it looked to me as though , y this is just , this would this is not statistically verified , but it did look to me as though there are bursts throughout , rather than being localized to a particular region . the part down there , where there 's the maximum number of , overlaps is an area where we were discussing whether or not it would be useful to indi to s to code stress , sentence stress as possible indication of , information retrieval . so it 's like , rather , lively discussion there . professor d: what was what 's the parenthesized that says , like e the first one that says six overlaps and then two point eight ? postdoc f: so , six is , two point eight percent of the total number of overlaps in the session . postdoc f: at the very end , this is when people were , packing up to go , there 's this final , we i do n't remember where the digits fell . i 'd have to look at that . 
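( a minimal sketch of the per-minute tally the script described above produces ; overlaps is a hypothetical list of ( start_seconds , n_speakers ) pairs , each overlap credited to the minute in which it starts : )

from collections import Counter

def overlaps_per_minute(overlaps):
    # count overlaps per minute, split by how many speakers are involved
    table = Counter()
    for start, n_speakers in overlaps:
        bucket = "2" if n_speakers == 2 else ("3" if n_speakers == 3 else ">3")
        table[(int(start // 60), bucket)] += 1
    return table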
but the final three there are no overlaps . and couple times there are not . so , i it seems like it goes through bursts but , that 's it . now , another question is there are there individual differences in whether you 're likely to be overlapped with or to overlap with others . and , again i want to emphasize this is just one particular meeting , and also there 's been no statistical testing of it all , but i , i took the coding of the i , my i had this script figure out , who was the first speaker , who was the second speaker involved in a two - person overlap , i did n't look at the ones involving three or more . and , this is how it breaks down in the individual cells of who tended to be overlapping most often with who else , and if you look at the marginal totals , which is the ones on the right side and across the bottom , you get the totals for an individual . so , if you look at the bottom , those are the , numbers of overlaps in which adam was involved as the person doing the overlapping and if you look i ' m , but you 're o alphabetical , that 's why i ' m choosing you and then if you look across the right , then that 's where he was the person who was the sp first speaker in the pair and got overlap overlapped with by somebody . postdoc f: and , then if you look down in the summary table , then you see that , th they 're differences in whether a person got overlapped with or overlapped by . postdoc f: it would be good to normalize with respect to that . now on the table i did take one step toward , away from the raw frequencies by putting , percentages . so that the percentage of time of the times that a person spoke , what percentage , w so . of the times a person spoke and furthermore was involved in a two - person overlap , what percentage of the time were they the overlapper and what percent of the time were they th the overlappee ? and there , it looks like you see some differences , that some people tend to be overlapped with more often than they 're overlapped , but , i e this is just one meeting , there 's no statistical testing involved , and that would be required for a finding of any scientific reliability . professor d: s so , i it would be statistically incorrect to conclude from this that adam talked too much . postdoc f: that 's right . and i ' m , i ' m i do n't see a point of singling people out , postdoc f: , it 's like i ' m not saying on the tape who did better or worse postdoc f: because i do n't think that it 's i , and th here 's a case where , human subjects people would say be that you anonymize the results , and , so , might as do this . grad h: , when this is what this is actually when jane sent this email first , is what caused me to start thinking about anonymizing the data . postdoc f: , fair enough . fair enough . and actually , not about an individual , it 's the point about tendencies toward , different styles , different speaker styles . 
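( a minimal sketch of the table's marginal percentages ; counts is a hypothetical square matrix with counts[i][j] = the number of two-person overlaps in which speaker i overlapped speaker j : )

import numpy as np

def overlap_percentages(counts):
    counts = np.asarray(counts, float)
    as_overlapper = counts.sum(axis=1)              # row totals: i did the overlapping
    as_overlappee = counts.sum(axis=0)              # column totals: i got overlapped
    involved = np.maximum(as_overlapper + as_overlappee, 1)
    return (100 * as_overlapper / involved,         # % of my two-person overlaps as overlapper
            100 * as_overlappee / involved)         # % as overlappee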
and it would be , there 's also the question of what type of overlap was this , and w what were they , and i and i know that distinguish at least three types and , probably more , the general cultural idea which w , the conversation analysts originally started with in the seventies was that we have this strict model where politeness involves that you let the person finish th before you start talking , and , w we know that an and they ' ve loosened up on that too s in the intervening time , that 's viewed as being a culturally - relative thing , that you have the high - involvement style from the east coast where people will overlap often as an indication of interest in what the other person is saying . and postdoc f: , exactly ! , there you go . fine , that 's alright , that 's ok . and and , in contrast , so deborah d and also deborah tannen 's thesis she talked about differences of these types , that they 're just different styles , and it 's you ca n't impose a model of there of the ideal being no overlaps , and , conversational analysts also agree with that , so it 's now , universally a ag with . and and , als , i ca n't say universally , but anyway , the people who used to say it was strict , now , do n't . they also , ack acknowledge the influence of sub of subcultural norms and cross - cultural norms and things . so , then it beco though so just superficially to give a couple ideas of the types of overlaps involved , i have at the bottom several that i noticed . so , there are backchannels , like what adam just did now and , anticipating the end of a question and simply answering it earlier , and there are several of those in this in these data where because we 're people who ' ve talked to each other , we the topic is , what the possibilities are and w and we ' ve spoken with each other so we the other person 's style is likely to be and so and t there are a number of places where someone just answered early . no problem . and places also which were interesting , where two or more people gave exactly th the same answer in unison different words but , the , everyone 's saying " yes " or , or ev even more sp specific than that . so , that , overlap 's not necessarily a bad thing and that it would be i m i useful to subdivide these further and see if there are individual differences in styles with respect to the types involved . and that 's all i wanted to say on that , unless people have questions . professor d: , th the biggest , result here , which is one we ' ve talked about many times and is n't new to us , but which would be interesting to show someone who is n't familiar with this is just the sheer number of overlaps . that that right ? that , professor d: here 's a relatively short meeting , it 's a forty plus minute meeting , and not only were there two hundred and fifteen overlaps but , there 's one minute there where there was n't any overlap ? phd a: it 'd be interesting to see what the total amount of time is in the overlaps , versus phd e: m i have n't averaged it now but , i will do the study of the with the program with the , the different , the , nnn , distribution of the duration of the overlaps . professor d: you ? ok , you don you do n't have a feeling for roughly how much it is ? phd e: mmm , because the , @ is @ . the duration is , the variation of the duration is , very big on the dat postdoc f: i suspect that it will also differ , depending on the type of overlap involved . 
phd e: because , on your surface a bit of zone of overlapping with the duration , overlapped and another very short . phd e: , i probably it 's very difficult to because the overlap is , on is only the in the final " s " of the fin the end word of the , previous speaker with the next word of the new speaker . , i considered that 's an overlap but it 's very short , it 's an " x " with a and the idea is probably , when , we studied th that zone , we h we have confusion with noise . with that fricative sounds , but i have new information but i have to study . phd g: you split this by minute , so if an overlap straddles the boundary between two minutes , that counts towards both of those minutes . postdoc f: yes . - . actually , actually not . , so le let 's think about the case where a starts speaking and then b overlaps with a , and then the minute boundary happens . and let 's say that after that minute boundary , b is still speaking , and a overlaps with b , that would be a new overlap . but otherwise , let 's say b comes to the conclusion of that turn without anyone overlapping with him or her , in which case there would be no overlap counted in that second minute . phd g: no , but suppose they both talk simultaneously both a portion of it is in minute one and another portion of minute two . postdoc f: ok . in that case , my c the coding that i was using since we have n't , incorporated adam 's , coding of overlap yets , the coding of , " yets " is not a word . since we have n't incorporated adam 's method of handling overl overlaps yet then that would have fallen through the cra cracks . it would be an underestimate of the number of overlaps because , i wou i would n't be able to pick it up from the way it was encoded so far . postdoc f: we just have n't done th the precise second to sec , second to second coding of when they occur . professor d: i ' m confused now . so l let me restate what andreas was saying and see . let 's say that in second fifty - seven of one minute , you start talking and i start talking and we ignore each other and keep on talking for six seconds . so we go over so we were talking over one another , and it 's just in each case , it 's just one interval . professor d: so , we talked over the minute boundary . is this considered as one overlap in each of the minutes , the way you have done this . postdoc f: no , it would n't . it would be considered as an overlap in the first one . professor d: ok , so that 's good , i , in the sense that andreas meant the question , postdoc f: i should also say i did a simplifying , count in that if a was speaking b overlapped with a and then a came back again and overlapped with b again , i did n't count that as a three - person overlap , i counted that as a two - person overlap , and it was a being overlapped with by d . because the idea was the first speaker had the floor and the second person started speaking and then the f the first person reasserted the floor thing . these are simplifying assumptions , did n't happen very often , there may be like three overlaps affected that way in the whole thing . grad h: cuz i i find it interesting that there were a large number of overlaps and they were all two - speaker . what i would have thought in is that when there were a large number of overlaps , it was because everyone was talking at once , but not . phd b: what 's really interesting though , it is before d saying " yes , meetings have a lot of overlaps " is to actually find out how many more we have than two - party . 
phd b: cuz in two - party conversations , like switchboard , there 's an awful lot too if you just look at backchannels , if you consider those overlaps ? it 's also ver it 's huge . it 's just that people have n't been looking at that because they ' ve been doing single - channel processing for speech recognition . phd b: so , the question is , how many more overlaps do you have of , say the two - person type , by adding more people . to a meeting , and it may be a lot more but i it may not be . professor d: , but see , i find it interesting even if it was n't any more , professor d: because since we were dealing with this full duplex thing in switchboard where it was just all separated out we just everything was just , phd b: , it 's not really " . it depends what you 're doing . so if you were actually having , depends what you 're doing , if right now we 're do we have individual mikes on the people in this meeting . so the question is , " are there really more overlaps happening than there would be in a two - person party " . and and there may be , but professor d: let let m let me rephrase what i ' m saying cuz i do n't think i ' m getting it across . what what i should n't use words like " because maybe that 's too i too imprecise . but what is that , in switchboard , despite the many other problems that we have , one problem that we 're not considering is overlap . and what we 're doing now is , aside from the many other differences in the task , we are considering overlap and one of the reasons that we 're considering it , one of them not all of them , one of them is that w at least , i ' m very interested in the scenario in which , both people talking are equally audible , and from a single microphone . and so , in that case , it does get mixed in , and it 's pretty hard to jus to just ignore it , to just do processing on one and not on the other . phd b: i agree that it 's an issue here but it 's also an issue for switchboard and if you think of meetings being recorded over the telephone , which , this whole point of studying meetings is n't just to have people in a room but to also have meetings over different phone lines . phd b: maybe far field mike people would n't be interested in that but all the dialogue issues still apply , so if each of us was calling and having a meeting that way you kn like a conference call . and , just the question is , y , in switchboard you would think that 's the simplest case of a meeting of more than one person , and i ' m wondering how much more overlap of the types that jane described happen with more people present . so it may be that having three people is very different from having two people or it may not be . professor d: that 's an important question to ask . what i ' m all i ' m s really saying is that i do n't think we were considering that in switchboard . phd b: there there 's actually to tell you the truth , the reason why it 's hard to measure is because of so , from the point of view of studying dialogue , which dan jurafsky and andreas and i had some projects on , you want to know the sequence of turns . so what happens is if you 're talking and i have a backchannel in the middle of your turn , and then you keep going what it looks like in a dialogue model is your turn and then my backchannel , even though my backchannel occurred completely inside your turn . phd b: so , for things like language modeling or dialogue modeling it 's we know that 's wrong in real time . 
but , because of the acoustic segmentations that were done and the fact that some of the acoustic data in switchboard were missing , people could n't study it , but that does n't mean in the real world that people do n't talk that way . so , it 's professor d: , i was n't saying that . i was just saying that now we 're looking at it . professor d: and , you maybe wanted to look at it before but , for these various technical reasons in terms of how the data was , you were n't . professor d: so that 's why it 's coming to us as new even though it may be the hypothesis you were offering , right ? if it 's the null hypothesis , and if actually you have as much overlap in a two - person conversation , we do n't know the answer to that . the reason we do n't know the answer is cuz it was n't studied , and it was n't studied because it was n't set up . right ? phd b: , all i meant is that if you 're asking the question from the point of view of what 's different about a meeting , studying meetings of , say , more than two people versus what kinds of questions you could ask with a two - person meeting , it 's important to distinguish that this project is getting a lot of overlap but other projects were too , we just could n't study them . and so professor d: see , let me my point was just if you wanted to say to somebody , " what have we learned about overlaps here ? " just never mind comparison with something else , what we ' ve learned about is overlaps in this situation , and the first - order thing i would say is that there 's a lot of them . in the sense that if you said if i professor d: in a way , what i ' m comparing to is more the common sense notion of how much people overlap . the fact that when adam was looking for a stretch of speech before that did n't have any overlaps , he was having such a hard time and now i look at this and i go , " , i see why he was having such a hard time " . professor d: i ' m saying if i have this complicated thing in front of me , which we 're gon na get much more sophisticated about when we get lots more data , but then , if i was gon na describe to somebody what did you learn right here , about the modest amount of data that was analyzed , i 'd say , " , the first - order thing was there was a lot of overlaps " . and the second - order thing is it 's not just a bunch of overlaps in one particular point , but that there 's overlaps throughout the thing . phd b: i ' m just saying that the reason you get overlaps may or may not be due to the number of people in the meeting . and that 's all . phd b: and it would actually be interesting to find out , because some of the data say switchboard , which is n't exactly the same context , these are two people who do n't know each other but we should still be able to somehow say what is the added contribution to overlap time of each additional person , something like that . postdoc f: and the reason is because there 's a limit there 's an upper bound on how many you can have , simply from the standpoint of audibility . when we speak we do make a judgment , as adults , of whether we can be heard . , children do n't adjust so , if a truck goes rolling past , adults will , depending , but mostly , adults will hold off to finish the end of the sentence till the noise is past . and we generally do monitor things like that , about whether our utterance will be in the clear or not .
and partly it 's related to rhythmic structure in conversation , so , you t , this is d also , people tend to time their , when they come into the conversation based on the overall rhythmic , ambient thing . so you do n't want to be c cross - cutting . and and , just to finish this , that that that there may be an upper bound on how many overlaps you can have , simply from the standpoint of audibility and how loud the other people are who are already in the fray . but i , of certain types . now if it 's just backchannels , people may be doing that with less intention of being heard , just spontaneously doing backchannels , in which case that those might there may be no upper bound on those . phd g: i have a feeling that backchannels , which are the vast majority of overlaps in switchboard , do n't play as big a role here , because it 's very unnatural , to backchannel if in a multi - audience , in a multi - person audience . phd b: if you can see them , actually . it 's interesting , so if you watch people are going like right right , like this here , phd g: but but , it 's odd if one person 's speaking and everybody 's listening , and it 's unusual to have everybody going " - , - " professor d: actually , i ' ve done it a fair number of times today . but . phd g: plus plus the so so actually , that 's in part because the nodding , if you have visual contact , the nodding has the same function , but on the phone , in switchboard you that would n't work . so so you need to use the backchannel . phd a: so , in the two - person conversations , when there 's backchannel , is there a great deal of overlap in the speech ? grad h: that is an earphone , so if you just put it so it 's on your ear . phd a: it 's hard to do both , ? no , when there 's backchannel , just i was just listening , and when there 's two people talking and there 's backchannel it seems like , the backchannel happens when , the pitch drops and the first person and a lot of times , the first person actually stops talking and then there 's a backchannel and then they start up again , and so i ' m wondering about h wonder how much overlap there is . is there a lot ? phd b: there 's a lot of the kind that jose was talking about , where , this is called " precision timing " in conversation analysis , where they come in overlapping , but at a point where the information is mostly complete . so all you 're missing is some last syllables or the last word or some highly predictable words . phd b: but , from information flow point of view it 's not an overlap in the predictable information . phd b: , that 's exactly , exactly why we wanted to study the precise timing of overlaps ins in switchboard , say , because there 's a lot of that . phd g: so so here 's a first interesting labeling task . , to distinguish between , say , backchannels precision timing , benevolent overlaps , and w and , i , hostile overlaps , where someone is trying to grab the floor from someone else . postdoc f: , you could do that . i ju that in this meeting i really had the feeling that was n't happening , that the hostile type . these were these were benevolent types , as people finishing each other 's sentences , and . phd g: ok . , i could imagine that as there 's a fair number of cases where , and this is , not really hostile , but competitive , where one person is finishing something and you have , like , two or three people jumping trying to , grab the next turn . 
phd g: and so it 's not against the person who talks first because actually we 're all waiting for that person to finish . but they all want to be next . professor d: i have a feeling most of these things are that are not a benevolent kind are , are competitive as opposed to real really hostile . professor d: , o one thing i wanted to or you can tell a good joke and then everybody 's laughing and you get a chance to g break in . professor d: , the other thing i was thinking was that , these all these interesting questions are , pretty hard to answer with , u , a small amount of data . professor d: so , i wonder if what you 're saying suggests that we should make a conscious attempt to have , a fair number of meetings with , a smaller number of people . we most of our meetings are , meetings currently with say five , six , seven , eight people should we really try to have some two - person meetings , or some three - person meetings and re record them just to beef up the statistics on that ? postdoc f: that 's a control . , it seems like there are two possibilities there , i it seems like if you have just two people it 's not really , y like a meeting , w is not as similar as the rest of the sample . it depends on what you 're after , but it seems like that would be more a case of the control condition , compared to , an experimental condition , with more than two . professor d: , liz was raising the question of whether i it 's the number there 's a relationship between the number of people and the number of overlaps or type of overlaps there , and , if you had two people meeting in this circumstance then you 'd still have the visuals . you would n't have that difference also that you have in the say , in switchboard data . postdoc f: , i ' m just thinking that 'd be more like a c control condition . phd g: if if the goal were to just look at overlap you would you could serve yourself save yourself a lot of time but not even transcri transcribe the words . phd b: , i was thinking you should be able to do this from the acoustics , on the close - talking mikes , phd b: right . , not as what , you would n't be able to have any typology , but you 'd get some rough statistics . professor d: but what do you think about that ? do you think that would be useful ? i ' m just thinking that as an action item of whether we should try to record some two - person meetings . phd b: i my first comment was , only that we should n not attribute overlaps only to meetings , but maybe that 's obvious , maybe everybody knew that , but that in normal conversation with two people there 's an awful lot of the same kinds of overlap , and that it would be interesting to look at whether there are these kinds of constraints that jane mentioned , that what maybe the additional people add to this competition that happens right after a turn , because now you can have five people trying to grab the turn , but pretty quickly there 're they back off and you go back to this only one person at a time with one person interrupting at a time . so , i . to answer your question i it i do n't think it 's crucial to have controls but it 's worth recording all the meetings we can . phd b: d i would n't not record a two - person meeting just because it only has two people . phd g: could we , we have in the past and continue will continue to have a fair number of phone conference calls . and , and as a to , as another c comparison condition , we could see what happens in terms of overlap , when you do n't have visual contact . 
grad h: it just seems like that 's a very different thing than what we 're doing . phd g: or , this is getting a little extravagant , we could put up some blinds to remove visual contact . phd b: that 's what they did on map task , this map task corpus ? they ran exactly the same pairs of people with and without visual cues and it 's quite interesting . professor d: , we record this meeting so regularly it would n't be that strange . professor d: , that was the other thing , were n't we gon na take a picture at the beginning of each of these meetings ? grad h: , what i had thought we were gon na do is just take pictures of the whiteboards , rather than take pictures of the meeting . postdoc f: linguistic anthropologists would suggest it would be useful to also take a picture of the meeting . postdoc f: because you then get the spatial relationship of the speakers . and that could be phd g: , you could do that by just noting on the enrollment sheet the seat number . grad h: seat number , that 's a good idea . i 'll do that on the next set of forms . grad h: the wireless ones . and even the jacks , i ' m sitting here and the jack is over in front of you . grad h: so i ' m gon na put little labels on all the chairs with the seat number . that 's a good idea . postdoc f: but , the linguistic anthropologists would say it would be good to have a digital picture anyway , postdoc f: because you get a sense also of posture . posture , and we could , like , block out the person 's face or whatever professor d: but just from one picture , i do n't know that you really get that . postdoc f: it 'd be better than nothing , just from a single picture you can tell some aspects . postdoc f: i could tell you , if i ' m in certain meetings i notice that there are certain people who really do the body language is very interesting in terms of the dominance aspect . postdoc f: . , you black out that part . but it 's just , the body grad h: , where we sit at the table , i find is very interesting , that we do tend to gravitate to the same place each time . and it 's somewhat coincidental . i ' m sitting here so that i can run into the room if the hardware starts catching fire . phd g: , no , you just like to be in charge , that 's why you 're sitting grad h: , i ' ve been playing with using the close - talking mike to try to figure out who 's speaking . so my first attempt was just using thresholding and filtering , that we talked about two weeks ago , and so i played with that a little bit , and it works ok , except that it 's very sensitive to your choice of your filter width and your threshold . so if you fiddle around with it a little bit and you get good numbers you can actually do a pretty good job of segmenting when someone 's talking and when they 're not . but if you try to use the same parameters on another speaker , it does n't work anymore , even if you normalize it based on the absolute loudness . grad h: the algorithm was , take every frame that 's over the threshold , and then median - filter it , and then look for runs . so there was a minimum run length , grad h: so you take each frame , and you compute the energy and if it 's over the threshold you set it to one , and if it 's under the threshold you set it to zero , so now you have a bit stream of zeros and ones . and then i median - filtered that using a fairly long filter length . , actually it depends on what you mean by long , tenth of a second sorts of numbers . and that 's to average out the pitch contours and things like that . and then i looked for long runs .
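A rough reconstruction of the segmenter just described, in Python, assuming 16 kHz single-channel audio from a close-talking mike. The frame size, threshold, filter length, and minimum run length here are illustrative guesses, since the actual values were hand-tuned per speaker.

    # per-frame energy -> 0/1 bit stream -> median filter over roughly a
    # tenth of a second -> keep runs of ones longer than a minimum length
    import numpy as np
    from scipy.signal import medfilt

    def segment(samples, rate=16000, frame_ms=10, threshold=1e-3,
                filt_frames=11, min_run=30):
        frame_len = int(rate * frame_ms / 1000)
        n = len(samples) // frame_len
        frames = samples[:n * frame_len].reshape(n, frame_len).astype(float)
        bits = ((frames ** 2).mean(axis=1) > threshold).astype(float)
        bits = medfilt(bits, filt_frames)            # smooths out pitch pulses
        speech, start = [], None
        for i, b in enumerate(list(bits) + [0.0]):   # sentinel closes a run
            if b and start is None:
                start = i
            elif not b and start is not None:
                if i - start >= min_run:             # minimum run length
                    speech.append((start * frame_ms / 1000.0,
                                   i * frame_ms / 1000.0))
                start = None
        return speech    # list of (start_sec, end_sec) speech intervals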
grad h: and that works ok , if you tune the filter parameters , if you tune how long your median filter is and how high you 're looking for your thresholds . grad h: i certainly could though . but this was just i had the program mostly written already so it was easy to do . ok , and then the other thing i did was i took javier 's speaker - change detector acoustic - change detector , and i implemented that with the close - talking mikes , and unfortunately that 's not working real well . it looks like the problem is he does it in two passes . the first pass is to find candidate places to do a break , and he does that using a neural net doing broad phone classification , and one of the phone classes is silence , and so the possible breaks are where silence starts and ends . and then he has a second pass which is a gaussian mixture model , looking for whether it improves or degrades to split at one of those particular places . and what looks like it 's happening is that even on the close - talking mike the broad phone class classifier 's doing a really bad job . grad h: , i have no idea . i do n't remember . do you remember , morgan , was it broadcast news ? grad h: so , at any rate , my next attempt , which i ' m in the midst of and have n't quite finished yet , was actually using the thresholding as the way of generating the candidates . because one of the things that definitely happens is if you put the threshold low you get lots of breaks , all of which are definitely acoustic events . they 're definitely someone talking . but , like , it could be someone who is n't the person here , but the person over there , or it can be the person breathing . and then feeding that into the acoustic change detector . and so that might work . but i have n't gotten very far on that . but all of this is close - talking mike , so it 's just trying to get some ground truth . phd e: only , but when i saw the speech from the pda and the close talker , there is a great difference in the signal . phd e: but i think that in the mixed file you can find zones with very different levels of energy . phd e: for an algorithm based on energy , that is , more or less , like a first sound energy detector . phd e: when you detect the first and the end the detector of , ehm , " princ " what is the name in english ? the detector of the beginning and end of a word , of an isolated word against the background noise , phd e: it 's probably going to work , because you have in the mixed files a great level of energy , and great differences between the speakers . and probably it is not so easy when you use the pda , because the energy level in that speech file is more similar between the different speakers , that is my opinion . phd e: it will be more difficult to detect the change based on energy there . phd e: and then another question , from when i reviewed the work of javier : the idea of using a neural network to get broad phonetic classes from the speech signal , as candidates .
i ' m considering that because javier only considered the silences as candidates , since silence is the only model he used to detect the possibility of a change between speakers , while other research groups working on broadcast news prefer to consider a change hypothesis between each phoneme . phd e: because i think it 's more realistic than only considering the silence between the speakers . silence between speakers exists , and it is an important acoustic event to consider . i found silences on many occasions in the speech file , but when you have two speakers together without enough silence between them , it is better to use the acoustic change detector , with the bic criterion , considering all the frames , in my opinion . professor d: , the reason that he just used silence was not because he thought it was better , it was the place he was starting from . so , he was trying to get something going , and , as in your case , if you 're here for only a modest number of months you try to pick a realistic goal , professor d: but his goal was always to proceed from there to then allow broad category changes also . phd e: but do you think that if you consider all the frames when applying the bic criterion to detect the acoustic changes between speakers , with silence or with overlapping , you have a general way of processing the acoustic change in a first step ? and then you can consider the energy as another parameter in the feature vector , that is the idea . and if you do that with the bic criterion , or with another measure of distance , in a first step , then you get the hypotheses of acoustic change to process , because probably you can find a small gap of silence between speakers with a small duration , less than two hundred milliseconds , and apply another algorithm , another approach like an energy - based detector , to consider that zone of small silence between speakers , or another algorithm to process the segments between the marks found by the bic criterion applied to each frame . it will be a more general approach compared with using a neural net or another speech recognizer with broad or narrow classes , because in my opinion , if you change the conditions of the speech if you adjust your algorithm , if you adapt the neural net used by javier , with a mixed speech file , phd e: and then you try to apply that speech recognizer to another signal , to the pda speech file , you will have problems , because of the conditions ; i suppose that you will need to retrain it .
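For reference, the BIC criterion under discussion, in its standard single-change-point form with full-covariance Gaussians. This is a textbook sketch, not Javier's implementation; the feature type and the penalty weight lam are assumptions to be tuned.

    # delta-BIC for splitting a window of feature frames at index 'split':
    # positive values favor two separate models, i.e. an acoustic change.
    # 'split' should leave enough frames on each side to estimate a covariance
    import numpy as np

    def delta_bic(X, split, lam=1.0):
        """X: (n_frames, dim) feature matrix, e.g. MFCCs."""
        n, d = X.shape
        logdet = lambda Y: np.linalg.slogdet(np.cov(Y, rowvar=False, bias=True))[1]
        r = 0.5 * (n * logdet(X)
                   - split * logdet(X[:split])
                   - (n - split) * logdet(X[split:]))
        penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
        return r - penalty   # scan 'split' over the window and keep peaks > 0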
professor d: , look , i used to work , like , on voiced - unvoiced - silence detection , and this is the thing . professor d: if you have somebody who has some experience with this thing , and they work on it for a couple months , they can come up with something that gets most of the cases fairly easily . then you say , " ok , i do n't just wanna get most of the cases , i want it to be really accurate . " then it gets really hard no matter what you do . so , the problem is that if you say , " i have these other data over here , that i learn things from , either explicit training of neural nets or of gaussian mixture models or whatever . " suppose you do n't use any of those things . you say you have looked for acoustic change . what does that mean ? that means you set some thresholds somewhere , and so where do you get your thresholds from ? from something that you looked at . so you always have this problem : you 're going to new data , how are you going to adapt whatever you can very quickly learn about the new data , if it 's gon na be different from old data that you have ? and that 's a problem with this . grad h: , also what i ' m doing right now is not intended to be an acoustic change detector for far - field mikes . what i ' m doing is trying to use the close - talking mike and just generate candidates and just try to get a first pass at something that works . grad h: and i have n't spent a lot of time on it and i ' m not intending to spend a lot of time on it . phd g: but , imagine building a model of speaker change detection that takes into account the far - field and actually , not just the close - talking mike for that speaker , but actually all of the speakers . phd g: if you model the effect that me speaking has on your microphone and everybody else 's microphone , you would build an hmm that has as a state space all of the possible speaker combinations professor d: but actually , andreas , maybe just something simpler but along the lines of what you 're saying , professor d: i was just realizing , i used to know this guy who used to build mike mixers automatic mike mixers where , in order to be able to turn up the gain as much as you can , you lower the gain on the mikes of people who are n't talking , professor d: and then he had some reasonable way of doing that , but , what if you were just looking at very simple measures like energy measures , but you do n't just compare it to some threshold overall , you compare it to the energy in the other microphones . grad h: i was thinking about doing that originally to find out who 's the loudest , and that person is certainly talking . but i also wanted to find , excuse me , overlap . so , not just the loudest . phd e: but i have found that when i analyzed the speech files from the close microphone , i found zones with different levels of energy . grad h: could you fill that out anyway ? just put your name in . do you want me to do it ? i 'll do it . phd e: including overlap zones . because it depends on the position of the microphone of each speaker whether you get more or less energy in the mixed signal . and then , if you consider energy to detect overlapping and you process the speech file from the mixed signals , it 's difficult , with energy only , to decide that a zone is an overlapping zone , if you process only the energy of each frame . professor d: , it 's probably harder , but what i was noting just when andreas raised that , was that there 's other information to be gained from looking at all of the microphones , and you may not need to look at very sophisticated things , because if most of the overlaps this does n't cover , say , three but if most of the overlaps , say , are two , then the distribution looks like there 's a couple high ones and the rest of them are low , professor d: , what , there 's some information there about their distribution even with very simple measures .
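A sketch of the cross-channel comparison being suggested: rather than one absolute threshold per mike, compare each close-talking channel's frame energy to the loudest channel, so the loudest speaker is flagged and any channel close behind it suggests overlap. The 6 dB margin and frame layout are illustrative assumptions.

    # flag every channel whose frame energy is within a margin of the
    # loudest channel; several flagged channels suggest overlapped speech
    import numpy as np

    def active_channels(frames, margin_db=6.0):
        """frames: (n_channels, frame_len) array of simultaneous frames."""
        energy = (frames.astype(float) ** 2).mean(axis=1)
        energy_db = 10.0 * np.log10(energy + 1e-12)   # guard against log(0)
        return np.where(energy_db >= energy_db.max() - margin_db)[0]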
, i had an idea while i was watching chuck nodding at a lot of these things , which is that we can all wear little bells on our heads , so that then you 'd know that phd a: actually , i saw a woman at the bus stop the other day who was talking on her cell phone speaking japanese , and was bowing , profusely . phd b: it 's very difficult if you try while you 're trying , say , to convince somebody on the phone it 's difficult not to move your hands . if you watch people they 'll actually do these things . i still think we should try a meeting or two with the blindfolds , at least of this meeting that we have lots of recordings of , maybe for part of the meeting , we do n't have to do it the whole meeting . phd b: that could be fun . it 'll be too hard to make barriers , i was thinking , because they have to go all the way postdoc f: actually i should also say i made barriers for the work i was doing with collin , which just used this foam board . postdoc f: really inexpensive . you can masking tape it together , these are pretty large partitions . phd b: but then we also have these mikes , is the other thing i was thinking , so we need a barrier that does n't disturb the sound , phd b: , it sounds weird but it 's cheap and , it 'd be interesting to have the camera going . postdoc f: we 're going to have to work on the , on the human subjects form . professor d: that 's the one that we videotape . , i wanna move this along . i did have this other agenda item , which is a list which i sent to a couple folks , but i wanted to get broader input on it , so this is the things that we did in the last three months not everything we did , but highlights that tell some outside person what you were actually working on . in no particular order : one , ten more hours of meetings recorded , something like that , from three months ago . xml formats and other transcription aspects sorted out and sent to ibm . , pilot data put together and sent to ibm for transcription , next batch of recorded data put together on the cd - roms for shipment to ibm , professor d: but , that 's why i phrased it that way , ok . human subjects approval on campus , and release forms worked out so the meeting participants have a chance to request audio pixelization of selected parts of their speech . audio pixelization software written and tested . preliminary analysis of overlaps in the pilot data we have transcribed , and exploratory analysis of long - distance inferences for topic coherence i was n't sure if that was the right way to describe that , because of that little exercise that you and lokendra did . professor d: i ' m probably saying this wrong , but what i said was exploratory analysis of long - distance inferences for topic coherence . professor d: something like that . so , a lot of that was from what you two were doing , so i sent it to you , and , mail me the corrections or suggestions for changes i do n't want to make this twice its length , but just improve it . is there anything anybody professor d: bunch of for s ok , maybe send me a sentence that 's a little thought through about that . grad h: so , ok , i 'll send you a sentence that does n't just say " a bunch of " ? professor d: range of things , . and , i threw in what you did with what jane did under the preliminary analysis of overlaps . thilo , can you tell us about all the work you ' ve done on this project in the last three months ?
phd c: , i did n't get it . what is " audio pixelization " ? professor d: , audio pix he did it , so why do n't you explain it quickly ? grad h: it 's just beeping out parts that you do n't want included in the meeting , so you can say things like , " , this should probably not be on the record , but beep " professor d: we spent a fair amount of time early on just dealing with this issue we realized , " , people are speaking in an impromptu way and they might say something that would embarrass them or others later " , and how do you get around that ? so in the consent form it says , we will look at the transcripts later and if there 's something that you 're unhappy with , . professor d: but you do n't want to just excise it , because you have to be careful about excising it , how you excise it keeping the timing right and so at the moment the idea we 're running with is putting the beep over it . grad h: , you can either beep or it can be silence . i could n't decide which was the right way to do it . beep is good auditorily , if someone is listening to it , there 's no mistake that it 's been beeped out , but for software it 's probably better for it to be silence . phd a: no , no . you could as long as you keep using the same beep , people could make a model of that beep , and postdoc f: it 's very clear . then you do n't think it 's a long pause . phd a: , it 's more obvious that there was something there than if there 's just silence . phd a: right . right . but if you just replaced it with silence , it 's not clear whether that 's really silence or postdoc f: one question . do you do it on all channels ? interesting . i like that . , i like that . grad h: , i have n't thrown away any of the meetings that i beeped . actually yours is the only one that i beeped , and then the darpa meeting . grad h: , and then the darpa meeting was excised completely , so it 's in a private directory . postdoc f: i have one conceptual thing i want to say , which is that you 're preserving the time relations , so you 're not just cutting you 're not doing scissor snips . you 're keeping the time duration of a deleted part . grad h: , since we wanna possibly synchronize these things as well . , i should have done that . phd b: so if there 's an overlap , like , if i ' m saying something that 's bleepable and somebody else overlaps during it , they also get bleeped too ?
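A duration-preserving sketch of the beeping scheme just described: the flagged interval is overwritten with a fixed tone (or silence) on every channel, so later timestamps still line up across files, and an overlapping speaker gets bleeped along with the flagged one. The tone frequency and amplitude are illustrative choices.

    # overwrite [start_sec, end_sec) with a tone or silence on all channels,
    # keeping file length and timing unchanged
    import numpy as np

    def bleep(audio, rate, start_sec, end_sec, freq=1000.0, use_beep=True):
        """audio: (n_channels, n_samples) int16 array, edited in place."""
        s, e = int(start_sec * rate), int(end_sec * rate)
        if use_beep:
            t = np.arange(e - s) / rate
            tone = (0.3 * 32767 * np.sin(2 * np.pi * freq * t)).astype(np.int16)
        else:
            tone = np.zeros(e - s, dtype=np.int16)
        audio[:, s:e] = tone   # same-length replacement on every channel
        return audio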
professor d: before we do the digits , i did also wanna remind people , do send me thoughts for an agenda , grad h: can you do it for them ? " and , no actually , you ca n't . phd a: actually that 's what you were giving us was another meeting and i was like , " , ok ! " phd b: how long does it take , just briefly , to label the , postdoc f: no . i have the script now , so it can work off the other thing , postdoc f: but , . because once his algorithm is up and running then we can do it that way . grad h: if it works well enough . right now it 's not quite to the point where it works . postdoc f: appreciate that . what this discussion caused me to wanna do is subdivide these further . i ' m gon na take a look at the backchannels , how much we have i hope to have that analyzed for next time . grad h: , my algorithm worked great actually on these , but when you wear it like that or with the lapel , or if you have it very far from your face , that 's when it starts failing . grad h: so , it was just a comment on the software , not a comment on prescriptions for how you wear microphones . postdoc f: do you want us to put a mark on the bottom of these when they ' ve actually been read , or do you just , i guess the only one that was n't read is known , so we do n't do it .
the berkeley meeting recorder group focussed its discussion on overlapping speech segments. speaker fe008 presented raw counts and percentages for one transcribed meeting , revealing a large number of overlaps throughout the 40-plus-minute transcript. efforts by speakers fe008 and fe016 are in progress to categorize and subcategorize types of overlapping speech and evaluate the contribution of multiple speakers in an interaction to the amount and types of overlap observed. speaker me011 described his attempts to automatically identify speakers via the close-talking microphone channels using thresholding and filtering methods and an existing speaker-change detection algorithm. the group also tentatively discussed the erection of visual barriers during meeting recordings , and speaker me013 presented a list of work performed by bmr over the previous three months to be included in a forthcoming report to ibm. for future meetings , speaker me011 will generate a system for mapping speakers and their positions in the recording room. speaker fe008 will analyze backchannels for a subset of meeting data and give a report in the next meeting. for language and dialogue modelling , current methods of marking and segmenting overlap are abstracted from real time , as individual speaker turns are indicated sequentially. a large amount of data must be collected to address research questions concerning overlapping speech. for automatic speaker identification , thresholding and filtering methods are sensitive to the particular filter width and threshold selected. while such parameters can be finely tuned for one speaker to achieve good results , extending the same parameters to another speaker is problematic. the broad phone classifier of the speaker-change detector is performing poorly. the prospect of erecting visual barriers during meetings would require partitioning off each of the participants. also , barriers that do not affect the overall room acoustics would be required. efforts are in progress to mark where regions of speaker overlap occur in meeting transcripts and note the number of speakers involved. such information is currently being encoded within relatively loose time boundaries. a large number of overlapping speech regions were identified throughout one recorded meeting , wherein overlaps were found to occur in bursts , rather than being evenly distributed throughout the meeting. a cursory analysis was done on regions of overlap involving two speakers to determine whether speakers are more likely to be overlapped with or to cause overlap with other speakers. attempts were also made to classify types of speaker overlap , e.g . backchannels , answering questions as they are being asked , and responding in unison , with future work focussed on subcategorizing types of backchannels. speaker fe016 is interested in the contribution of multiple speakers in an interaction to the amount and types of overlap observed , and comparing this to findings from the switchboard corpus. future work includes generating predictive models of overlap , and the tentative erection of visual barriers during meeting recordings.
###dialogue: grad h: , so if so if anyone has n't signed the consent form , do so . professor d: just just to be consistent , from here on in at least , that we 'll do it at the end . professor d: and right ? that was that was the point . so , i had asked actually anybody who had any ideas for an agenda to send it to me and no one did . so , professor d: right , s ok , so one item for an agenda is jane has some research to talk about , research issues . and , adam has some short research issues . professor d: , i have a list of things that were done over the last three months i was supposed to send off , and , i sent a note about it to adam and jane but i 'll just run through it also and see if someone thinks it 's inaccurate or insufficient . professor d: , to , ibm . they 're , so . , so , so , i 'll go through that . , and , anything else ? anyone wants to talk about ? professor d: cuz that 's cuz that was all about the , i chat with you about that off - line . that 's another thing . , and , anything else ? nothing else ? , there 's a , there is a , telephone call tomorrow , which will be a conference call that some of us are involved in for a possible proposal . , we 'll talk about it next week if something grad h: do you want me to be there for that ? i noticed you c ' ed me , but i was n't actually a recipient . i did n't quite to make of that . professor d: , we 'll talk about that after our meeting . , ok . so it sounds like the three main things that we have to talk about are , this list , jane and adam have some research items , and , other than that , anything , as usual , anything goes beyond that . ok , jane , since you were cut off last time why do n't we start with yours , make we get to it . postdoc f: but , same idea . so , if you ' ve looked at this you ' ve seen it before , so , as , part of the encoding includes a mark that indicates an overlap . it 's not indicated with , tight precision , it 's just indicated that ok , so , it 's indicated to so the people parts of sp which stretches of speech were in the clear , versus being overlapped by others . so , i used this mark and , divided the i wrote a script which divides things into individual minutes , of which we ended up with forty five , and a little bit . and , minute zero , is the first minute up to sixty seconds . postdoc f: and , what you can see is the number of overlaps and then to the right , whether they involve two speakers , three speakers , or more than three speakers . and , and , what i was looking for sp specifically was the question of whether they 're distributed evenly throughout or whether they 're bursts of them . and it looked to me as though , y this is just , this would this is not statistically verified , but it did look to me as though there are bursts throughout , rather than being localized to a particular region . the part down there , where there 's the maximum number of , overlaps is an area where we were discussing whether or not it would be useful to indi to s to code stress , sentence stress as possible indication of , information retrieval . so it 's like , rather , lively discussion there . professor d: what was what 's the parenthesized that says , like e the first one that says six overlaps and then two point eight ? postdoc f: so , six is , two point eight percent of the total number of overlaps in the session . postdoc f: at the very end , this is when people were , packing up to go , there 's this final , we i do n't remember where the digits fell . i 'd have to look at that . 
but the final three there are no overlaps . and couple times there are not . so , i it seems like it goes through bursts but , that 's it . now , another question is there are there individual differences in whether you 're likely to be overlapped with or to overlap with others . and , again i want to emphasize this is just one particular meeting , and also there 's been no statistical testing of it all , but i , i took the coding of the i , my i had this script figure out , who was the first speaker , who was the second speaker involved in a two - person overlap , i did n't look at the ones involving three or more . and , this is how it breaks down in the individual cells of who tended to be overlapping most often with who else , and if you look at the marginal totals , which is the ones on the right side and across the bottom , you get the totals for an individual . so , if you look at the bottom , those are the , numbers of overlaps in which adam was involved as the person doing the overlapping and if you look i ' m , but you 're o alphabetical , that 's why i ' m choosing you and then if you look across the right , then that 's where he was the person who was the sp first speaker in the pair and got overlap overlapped with by somebody . postdoc f: and , then if you look down in the summary table , then you see that , th they 're differences in whether a person got overlapped with or overlapped by . postdoc f: it would be good to normalize with respect to that . now on the table i did take one step toward , away from the raw frequencies by putting , percentages . so that the percentage of time of the times that a person spoke , what percentage , w so . of the times a person spoke and furthermore was involved in a two - person overlap , what percentage of the time were they the overlapper and what percent of the time were they th the overlappee ? and there , it looks like you see some differences , that some people tend to be overlapped with more often than they 're overlapped , but , i e this is just one meeting , there 's no statistical testing involved , and that would be required for a finding of any scientific reliability . professor d: s so , i it would be statistically incorrect to conclude from this that adam talked too much . postdoc f: that 's right . and i ' m , i ' m i do n't see a point of singling people out , postdoc f: , it 's like i ' m not saying on the tape who did better or worse postdoc f: because i do n't think that it 's i , and th here 's a case where , human subjects people would say be that you anonymize the results , and , so , might as do this . grad h: , when this is what this is actually when jane sent this email first , is what caused me to start thinking about anonymizing the data . postdoc f: , fair enough . fair enough . and actually , not about an individual , it 's the point about tendencies toward , different styles , different speaker styles . 
and it would be , there 's also the question of what type of overlap was this , and w what were they , and i and i know that distinguish at least three types and , probably more , the general cultural idea which w , the conversation analysts originally started with in the seventies was that we have this strict model where politeness involves that you let the person finish th before you start talking , and , w we know that an and they ' ve loosened up on that too s in the intervening time , that 's viewed as being a culturally - relative thing , that you have the high - involvement style from the east coast where people will overlap often as an indication of interest in what the other person is saying . and postdoc f: , exactly ! , there you go . fine , that 's alright , that 's ok . and and , in contrast , so deborah d and also deborah tannen 's thesis she talked about differences of these types , that they 're just different styles , and it 's you ca n't impose a model of there of the ideal being no overlaps , and , conversational analysts also agree with that , so it 's now , universally a ag with . and and , als , i ca n't say universally , but anyway , the people who used to say it was strict , now , do n't . they also , ack acknowledge the influence of sub of subcultural norms and cross - cultural norms and things . so , then it beco though so just superficially to give a couple ideas of the types of overlaps involved , i have at the bottom several that i noticed . so , there are backchannels , like what adam just did now and , anticipating the end of a question and simply answering it earlier , and there are several of those in this in these data where because we 're people who ' ve talked to each other , we the topic is , what the possibilities are and w and we ' ve spoken with each other so we the other person 's style is likely to be and so and t there are a number of places where someone just answered early . no problem . and places also which were interesting , where two or more people gave exactly th the same answer in unison different words but , the , everyone 's saying " yes " or , or ev even more sp specific than that . so , that , overlap 's not necessarily a bad thing and that it would be i m i useful to subdivide these further and see if there are individual differences in styles with respect to the types involved . and that 's all i wanted to say on that , unless people have questions . professor d: , th the biggest , result here , which is one we ' ve talked about many times and is n't new to us , but which would be interesting to show someone who is n't familiar with this is just the sheer number of overlaps . that that right ? that , professor d: here 's a relatively short meeting , it 's a forty plus minute meeting , and not only were there two hundred and fifteen overlaps but , there 's one minute there where there was n't any overlap ? phd a: it 'd be interesting to see what the total amount of time is in the overlaps , versus phd e: m i have n't averaged it now but , i will do the study of the with the program with the , the different , the , nnn , distribution of the duration of the overlaps . professor d: you ? ok , you don you do n't have a feeling for roughly how much it is ? phd e: mmm , because the , @ is @ . the duration is , the variation of the duration is , very big on the dat postdoc f: i suspect that it will also differ , depending on the type of overlap involved . 
phd e: because , on your surface a bit of zone of overlapping with the duration , overlapped and another very short . phd e: , i probably it 's very difficult to because the overlap is , on is only the in the final " s " of the fin the end word of the , previous speaker with the next word of the new speaker . , i considered that 's an overlap but it 's very short , it 's an " x " with a and the idea is probably , when , we studied th that zone , we h we have confusion with noise . with that fricative sounds , but i have new information but i have to study . phd g: you split this by minute , so if an overlap straddles the boundary between two minutes , that counts towards both of those minutes . postdoc f: yes . - . actually , actually not . , so le let 's think about the case where a starts speaking and then b overlaps with a , and then the minute boundary happens . and let 's say that after that minute boundary , b is still speaking , and a overlaps with b , that would be a new overlap . but otherwise , let 's say b comes to the conclusion of that turn without anyone overlapping with him or her , in which case there would be no overlap counted in that second minute . phd g: no , but suppose they both talk simultaneously both a portion of it is in minute one and another portion of minute two . postdoc f: ok . in that case , my c the coding that i was using since we have n't , incorporated adam 's , coding of overlap yets , the coding of , " yets " is not a word . since we have n't incorporated adam 's method of handling overl overlaps yet then that would have fallen through the cra cracks . it would be an underestimate of the number of overlaps because , i wou i would n't be able to pick it up from the way it was encoded so far . postdoc f: we just have n't done th the precise second to sec , second to second coding of when they occur . professor d: i ' m confused now . so l let me restate what andreas was saying and see . let 's say that in second fifty - seven of one minute , you start talking and i start talking and we ignore each other and keep on talking for six seconds . so we go over so we were talking over one another , and it 's just in each case , it 's just one interval . professor d: so , we talked over the minute boundary . is this considered as one overlap in each of the minutes , the way you have done this . postdoc f: no , it would n't . it would be considered as an overlap in the first one . professor d: ok , so that 's good , i , in the sense that andreas meant the question , postdoc f: i should also say i did a simplifying , count in that if a was speaking b overlapped with a and then a came back again and overlapped with b again , i did n't count that as a three - person overlap , i counted that as a two - person overlap , and it was a being overlapped with by d . because the idea was the first speaker had the floor and the second person started speaking and then the f the first person reasserted the floor thing . these are simplifying assumptions , did n't happen very often , there may be like three overlaps affected that way in the whole thing . grad h: cuz i i find it interesting that there were a large number of overlaps and they were all two - speaker . what i would have thought in is that when there were a large number of overlaps , it was because everyone was talking at once , but not . phd b: what 's really interesting though , it is before d saying " yes , meetings have a lot of overlaps " is to actually find out how many more we have than two - party . 
phd b: cuz in two - party conversations , like switchboard , there 's an awful lot too if you just look at backchannels , if you consider those overlaps ? it 's also ver it 's huge . it 's just that people have n't been looking at that because they ' ve been doing single - channel processing for speech recognition . phd b: so , the question is , how many more overlaps do you have of , say the two - person type , by adding more people . to a meeting , and it may be a lot more but i it may not be . professor d: , but see , i find it interesting even if it was n't any more , professor d: because since we were dealing with this full duplex thing in switchboard where it was just all separated out we just everything was just , phd b: , it 's not really " . it depends what you 're doing . so if you were actually having , depends what you 're doing , if right now we 're do we have individual mikes on the people in this meeting . so the question is , " are there really more overlaps happening than there would be in a two - person party " . and and there may be , but professor d: let let m let me rephrase what i ' m saying cuz i do n't think i ' m getting it across . what what i should n't use words like " because maybe that 's too i too imprecise . but what is that , in switchboard , despite the many other problems that we have , one problem that we 're not considering is overlap . and what we 're doing now is , aside from the many other differences in the task , we are considering overlap and one of the reasons that we 're considering it , one of them not all of them , one of them is that w at least , i ' m very interested in the scenario in which , both people talking are equally audible , and from a single microphone . and so , in that case , it does get mixed in , and it 's pretty hard to jus to just ignore it , to just do processing on one and not on the other . phd b: i agree that it 's an issue here but it 's also an issue for switchboard and if you think of meetings being recorded over the telephone , which , this whole point of studying meetings is n't just to have people in a room but to also have meetings over different phone lines . phd b: maybe far field mike people would n't be interested in that but all the dialogue issues still apply , so if each of us was calling and having a meeting that way you kn like a conference call . and , just the question is , y , in switchboard you would think that 's the simplest case of a meeting of more than one person , and i ' m wondering how much more overlap of the types that jane described happen with more people present . so it may be that having three people is very different from having two people or it may not be . professor d: that 's an important question to ask . what i ' m all i ' m s really saying is that i do n't think we were considering that in switchboard . phd b: there there 's actually to tell you the truth , the reason why it 's hard to measure is because of so , from the point of view of studying dialogue , which dan jurafsky and andreas and i had some projects on , you want to know the sequence of turns . so what happens is if you 're talking and i have a backchannel in the middle of your turn , and then you keep going what it looks like in a dialogue model is your turn and then my backchannel , even though my backchannel occurred completely inside your turn . phd b: so , for things like language modeling or dialogue modeling it 's we know that 's wrong in real time . 
but , because of the acoustic segmentations that were done and the fact that some of the acoustic data in switchboard were missing , people could n't study it , but that does n't mean in the real world that people do n't talk that way . so , it 's professor d: , i was n't saying that . i was just saying that w now we 're looking at it . professor d: and and , you maybe wanted to look at it before but , for these various technical reasons in terms of how the data was you were n't . professor d: so that 's why it 's coming to us as new even though it may be , if your hypothes the hypothesis you were offering right ? if it 's the null poth hypothesis , and if actually you have as much overlap in a two - person , we the answer to that . the reason we the answer to is cuz it was n't studied and it was n't studied because it was n't set up . right ? phd b: , all i meant is that if you 're asking the question from the point of view of what 's different about a meeting , studying meetings of , say , more than two people versus what kinds of questions you could ask with a two - person meeting . it 's important to distinguish that , this project is getting a lot of overlap but other projects were too , but we just could n't study them . and and so professor d: see , i le let me t , my point was just if you wanted to say to somebody , " what have we learned about overlaps here ? " just never mind comparison with something else , what we ' ve learned about is overlaps in this situation , is that the first - order thing i would say is that there 's a lot of them . in in the sense that i if you said if i professor d: in a way , i what i ' m comparing to is more the common sense notion of how much people overlap . the fact that when , adam was looking for a stretch of speech before , that did n't have any overlaps , and he w he was having such a hard time and now i look at this and i go , " , see why he was having such a hard time " . professor d: i ' m saying if i have this complicated thing in front of me , and we sh which , we 're gon na get much more sophisticated about when we get lots more data , but then , if i was gon na describe to somebody what did you learn right here , about , the modest amount of data that was analyzed i 'd say , " , the first - order thing was there was a lot of overlaps " . and it 's not just an overlap bunch of overlaps second - order thing is it 's not just a bunch of overlaps in one particular point , but that there 's overlaps , throughout the thing . phd b: i ' m just saying that it may the reason you get overlaps may or may not be due to the number of people in the meeting . and that 's all . phd b: and and it would actually be interesting to find out because some of the data say switchboard , which is n't exactly the same context , these are two people who each other and , but we should still be able to somehow say what is the added contra contribution to overlap time of each additional person , like that . postdoc f: and the reason is because there 's a limit there 's an upper bound on how many you can have , simply from the standpoint of audibility . when we speak we do make a judgment of " can " , as adults . , children do n't adjust so , if a truck goes rolling past , adults will , depending , but mostly , adults will hold off to what to finish the end of the sentence till the noise is past . and we generally do monitor things like that , about whether we whether our utterance will be in the clear or not . 
and partly it 's related to rhythmic structure in conversation , so , you t , this is d also , people tend to time their , when they come into the conversation based on the overall rhythmic , ambient thing . so you do n't want to be c cross - cutting . and and , just to finish this , that that that there may be an upper bound on how many overlaps you can have , simply from the standpoint of audibility and how loud the other people are who are already in the fray . but i , of certain types . now if it 's just backchannels , people may be doing that with less intention of being heard , just spontaneously doing backchannels , in which case that those might there may be no upper bound on those . phd g: i have a feeling that backchannels , which are the vast majority of overlaps in switchboard , do n't play as big a role here , because it 's very unnatural , to backchannel if in a multi - audience , in a multi - person audience . phd b: if you can see them , actually . it 's interesting , so if you watch people are going like right right , like this here , phd g: but but , it 's odd if one person 's speaking and everybody 's listening , and it 's unusual to have everybody going " - , - " professor d: actually , i ' ve done it a fair number of times today . but . phd g: plus plus the so so actually , that 's in part because the nodding , if you have visual contact , the nodding has the same function , but on the phone , in switchboard you that would n't work . so so you need to use the backchannel . phd a: so , in the two - person conversations , when there 's backchannel , is there a great deal of overlap in the speech ? grad h: that is an earphone , so if you just put it so it 's on your ear . phd a: it 's hard to do both , ? no , when there 's backchannel , just i was just listening , and when there 's two people talking and there 's backchannel it seems like , the backchannel happens when , the pitch drops and the first person and a lot of times , the first person actually stops talking and then there 's a backchannel and then they start up again , and so i ' m wondering about h wonder how much overlap there is . is there a lot ? phd b: there 's a lot of the kind that jose was talking about , where , this is called " precision timing " in conversation analysis , where they come in overlapping , but at a point where the information is mostly complete . so all you 're missing is some last syllables or the last word or some highly predictable words . phd b: but , from information flow point of view it 's not an overlap in the predictable information . phd b: , that 's exactly , exactly why we wanted to study the precise timing of overlaps ins in switchboard , say , because there 's a lot of that . phd g: so so here 's a first interesting labeling task . , to distinguish between , say , backchannels precision timing , benevolent overlaps , and w and , i , hostile overlaps , where someone is trying to grab the floor from someone else . postdoc f: , you could do that . i ju that in this meeting i really had the feeling that was n't happening , that the hostile type . these were these were benevolent types , as people finishing each other 's sentences , and . phd g: ok . , i could imagine that as there 's a fair number of cases where , and this is , not really hostile , but competitive , where one person is finishing something and you have , like , two or three people jumping trying to , grab the next turn . 
phd g: and so it 's not against the person who talks first because actually we 're all waiting for that person to finish . but they all want to be next . professor d: i have a feeling most of these things are that are not a benevolent kind are , are competitive as opposed to real really hostile . professor d: , o one thing i wanted to or you can tell a good joke and then everybody 's laughing and you get a chance to g break in . professor d: , the other thing i was thinking was that , these all these interesting questions are , pretty hard to answer with , u , a small amount of data . professor d: so , i wonder if what you 're saying suggests that we should make a conscious attempt to have , a fair number of meetings with , a smaller number of people . we most of our meetings are , meetings currently with say five , six , seven , eight people should we really try to have some two - person meetings , or some three - person meetings and re record them just to beef up the statistics on that ? postdoc f: that 's a control . , it seems like there are two possibilities there , i it seems like if you have just two people it 's not really , y like a meeting , w is not as similar as the rest of the sample . it depends on what you 're after , but it seems like that would be more a case of the control condition , compared to , an experimental condition , with more than two . professor d: , liz was raising the question of whether i it 's the number there 's a relationship between the number of people and the number of overlaps or type of overlaps there , and , if you had two people meeting in this circumstance then you 'd still have the visuals . you would n't have that difference also that you have in the say , in switchboard data . postdoc f: , i ' m just thinking that 'd be more like a c control condition . phd g: if if the goal were to just look at overlap you would you could serve yourself save yourself a lot of time but not even transcri transcribe the words . phd b: , i was thinking you should be able to do this from the acoustics , on the close - talking mikes , phd b: right . , not as what , you would n't be able to have any typology , but you 'd get some rough statistics . professor d: but what do you think about that ? do you think that would be useful ? i ' m just thinking that as an action item of whether we should try to record some two - person meetings . phd b: i my first comment was , only that we should n not attribute overlaps only to meetings , but maybe that 's obvious , maybe everybody knew that , but that in normal conversation with two people there 's an awful lot of the same kinds of overlap , and that it would be interesting to look at whether there are these kinds of constraints that jane mentioned , that what maybe the additional people add to this competition that happens right after a turn , because now you can have five people trying to grab the turn , but pretty quickly there 're they back off and you go back to this only one person at a time with one person interrupting at a time . so , i . to answer your question i it i do n't think it 's crucial to have controls but it 's worth recording all the meetings we can . phd b: d i would n't not record a two - person meeting just because it only has two people . phd g: could we , we have in the past and continue will continue to have a fair number of phone conference calls . and , and as a to , as another c comparison condition , we could see what happens in terms of overlap , when you do n't have visual contact . 
grad h: it just seems like that 's a very different thing than what we 're doing . phd g: or , this is getting a little extravagant , we could put up some blinds to remove visual contact . phd b: that 's what they did on the map task corpus ? they ran exactly the same pairs of people with and without visual cues and it 's quite interesting . professor d: we record this meeting so regularly it would n't be that strange . professor d: that was the other thing , were n't we gon na take a picture at the beginning of each of these meetings ? grad h: what i had thought we were gon na do is just take pictures of the whiteboards , rather than take pictures of the meeting . postdoc f: linguistic anthropologists would suggest it would be useful to also take a picture of the meeting . postdoc f: because you get then the spatial relationship of the speakers . and that could be phd g: you could do that by just noting on the enrollment sheet the seat number . grad h: seat number , that 's a good idea . i 'll do that on the next set of forms . grad h: the wireless ones . and even the jacks , i ' m sitting here and the jack is over in front of you . grad h: so i ' m gon na put little labels on all the chairs with the seat number . that 's a good idea . postdoc f: but the linguistic anthropologists would say it would be good to have a digital picture anyway , postdoc f: because you get a sense also of posture . posture , and we could , like , block out the person 's face or whatever professor d: but from just one picture , i do n't think that you really get that . postdoc f: it 'd be better than nothing , just from a single picture you can tell some aspects . postdoc f: i could tell you , if i ' m in certain meetings i notice that there are certain people who really do the body language is very interesting in terms of the dominance aspect . postdoc f: you black out that part . but it 's just , the body grad h: where we sit at the table , i find is very interesting , that we do tend to gravitate to the same place each time . and it 's somewhat coincidental . i ' m sitting here so that i can run into the room if the hardware starts catching fire . phd g: no , you just like to be in charge , that 's why you 're sitting there grad h: i ' ve been playing with using the close - talking mike to try to figure out who 's speaking . so my first attempt was just using thresholding and filtering , that we talked about two weeks ago , and i played with that a little bit , and it works ok , except that it 's very sensitive to your choice of filter width and threshold . so if you fiddle around with it a little bit and you get good numbers you can actually do a pretty good job of segmenting when someone 's talking and when they 're not . but if you try to use the same parameters on another speaker , it does n't work anymore , even if you normalize based on the absolute loudness . grad h: the algorithm was , take every frame that 's over the threshold , and then median - filter it , and then look for runs . so there was a minimum run length , grad h: so you take each frame , and you compute the energy : if it 's over the threshold you set it to one , and if it 's under the threshold you set it to zero , so now you have a bit stream of zeros and ones . and then i median - filtered that using a fairly long filter length . actually , it depends on what you mean by long , tenth - of - a - second sorts of numbers .
and that 's to average out pitch , the pitch contours , and things like that . and then i looked for long runs . grad h: and that works ok , if you tune the filter parameters , if you tune how long your median filter is and how high you 're setting your thresholds . grad h: i certainly could though . but this was just i had the program mostly written already so it was easy to do . ok , and then the other thing i did was i took javier 's speaker - change detector acoustic - change detector and i implemented that with the close - talking mikes , and unfortunately that 's not working real well . it looks like the problem is this : he does it in two passes . the first pass is to find candidate places to do a break , and he does that using a neural net doing broad phone classification , where one of the phone classes is silence , so the possible breaks are where silence starts and ends . and then he has a second pass which is a gaussian mixture model , looking at whether it improves or degrades things to split at one of those particular places . and what looks like it 's happening is that even on the close - talking mike the broad phone classifier 's doing a really bad job . grad h: i have no idea . i do n't remember . do you remember , morgan , was it broadcast news ? grad h: so , at any rate , my next attempt , which i ' m in the midst of and have n't quite finished yet , was actually using the thresholding as the way of generating the candidates . because one of the things that definitely happens is if you put the threshold low you get lots of breaks , all of which are definitely acoustic events . they 're definitely someone talking . but , like , it could be someone who is n't the person here but the person over there , or it can be the person breathing . and then feeding that into the acoustic change detector . and so that might work . but i have n't gotten very far on that . but all of this is close - talking mike , so it 's just trying to get some ground truth .
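a minimal sketch of the thresholding scheme described here , assuming 16 khz single - channel audio in a numpy array . numpy and scipy are assumed , and every parameter value below ( frame size , threshold , filter length , minimum run length ) is a hypothetical starting point rather than the tuned settings being discussed , and this is not the actual program :

    import numpy as np
    from scipy.signal import medfilt

    def speech_runs(samples, rate=16000, frame_ms=10, thresh=0.02,
                    filt_frames=11, min_run_frames=20):
        frame_len = int(rate * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
        # per-frame rms energy, turned into a 0/1 bit stream by the threshold
        energy = np.sqrt((frames.astype(float) ** 2).mean(axis=1))
        bits = (energy > thresh).astype(float)
        # median filter (on the order of a tenth of a second here) smooths
        # pitch-rate fluctuations out of the bit stream
        bits = medfilt(bits, filt_frames)
        # look for runs of ones at least min_run_frames long
        runs, start = [], None
        for i, b in enumerate(bits):
            if b and start is None:
                start = i
            elif not b and start is not None:
                if i - start >= min_run_frames:
                    runs.append((start * frame_ms / 1000.0,
                                 i * frame_ms / 1000.0))
                start = None
        if start is not None and n_frames - start >= min_run_frames:
            runs.append((start * frame_ms / 1000.0,
                         n_frames * frame_ms / 1000.0))
        return runs  # list of (start_sec, end_sec) candidate speech segments

as noted above , the weak point of this family of detectors is that thresh has to be re - tuned per speaker , and normalizing the energy per channel only partially fixes that .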
phd e: one comment . when i saw the speech from the pda and from the close talker , there is a great difference in the signal . phd e: but i think that in the mixed file you can find zones with very different levels of energy . phd e: for an algorithm based on energy , more or less like a first sound energy detector , phd e: like the detector of what is the name in english ? an endpoint detector , for an isolated word against the background , phd e: it 's probably going to work , because in the mixed files you have a great level of energy and a great difference between the speakers . and it 's probably not so easy when you use the pda , because the energy level in that speech file is more similar between the different speakers , that is my opinion . phd e: it will be more difficult to detect the change in energy . phd e: and another question , from when i reviewed the work of javier , about the idea of using a neural network to get broad phonetic classes as candidates from the speech signal . javier only considered silence as a candidate , because it is the only model he used to detect the possibility of a change between speakers . other research groups working on broadcast news prefer to consider a change hypothesis between each phoneme . phd e: because i think that is more realistic than only considering the silence between speakers . silence between speakers does exist , and it is an important acoustic event to consider , and i found silence on many occasions in the speech file . but when you have two speakers together without enough silence between them , it is better , in my opinion , to use an acoustic change detector , a bic criterion , that considers all the frames . professor d: the reason that he just used silence was not because he thought it was better , it was the place he was starting . he was trying to get something going , and , as in your case , if you 're here for only a modest number of months you try to pick a realistic goal , professor d: but his goal was always to proceed from there to then allow broad category changes also . phd e: but do you think you could consider all the frames , and apply the bic criterion to detect the different acoustic changes between speakers , with silence or with overlapping , as a general way of processing the acoustic change in a first step ? and then you can consider the energy as another parameter in the feature vector , this is the idea . and if you do that with a bic criterion , or with another kind of distance , in a first step , then you get the hypothesized acoustic changes to post - process , because you can probably find a small gap of silence between speakers , with a duration of less than two hundred milliseconds , and apply another algorithm , a detector of energy change , to that zone of small silence between speakers , or another algorithm to process the segments between the marks found by the bic criterion applied to each frame . it would be a more general approach , if we compare it with using a neural net or another speech recognizer with broad or narrow classes , because in my opinion , if you adjust your algorithm to the mixed speech file , and adapt the neural net used by javier with the mixed file , phd e: and then you try to apply that speech recognizer to the pda speech file , you will have problems because of the different conditions . i suppose that you will need to retrain it .
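for reference , one common form of the bic change test being proposed here , scoring a single candidate boundary inside a window of feature vectors . this is a generic textbook version with full - covariance gaussians , not javier 's implementation or anyone 's here ; the penalty weight lam is the usual tunable lambda :

    import numpy as np

    def delta_bic(X, t, lam=1.0):
        # X: (n_frames, n_dims) feature window; t: candidate boundary frame.
        # t should leave enough frames on each side to estimate a covariance.
        # a positive return value favors splitting the window at t, i.e.
        # two gaussians (one per side) beat a single gaussian for the window.
        n, d = X.shape
        def logdet(segment):
            cov = np.cov(segment, rowvar=False)
            sign, val = np.linalg.slogdet(cov + 1e-6 * np.eye(d))
            return val
        # log-likelihood-ratio term from the covariance determinants
        ratio = n * logdet(X) - t * logdet(X[:t]) - (n - t) * logdet(X[t:])
        # complexity penalty for the extra mean and covariance parameters
        penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
        return 0.5 * ratio - lam * penalty

the usual recipe is to sweep t over candidate frames ( or over silence - based candidates , as in the two - pass scheme ) and accept a change wherever delta_bic comes out positive .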
" suppose you do n't use any of those things . you say you have looked for acoustic change . , what does that mean ? that that means you set some thresholds somewhere , and so where do you get your thresholds from ? from something that you looked at . so you always have this problem , you 're going to new data h how are you going to adapt whatever you can very quickly learn about the new data ? , if it 's gon na be different from old data that you have ? and that 's a problem with this . grad h: , also what i ' m doing right now is not intended to be an acoustic change detector for far - field mikes . what i ' m doing is trying to use the close - talking mike and just use can - and just generate candidate and just try to get a first pass at something that works . grad h: and i have n't spent a lot of time on it and i ' m not intending to spend a lot of time on it . phd g: but , imagine building a model of speaker change detection that takes into account both the far - field and the actually , not just the close - talking mike for that speaker , but actually for all of th for all of the speakers . phd g: if you model the effect that me speaking has on your microphone and everybody else 's microphone , as on that , and you build , you 'd you would build a an hmm that has as a state space all of the possible speaker combinations professor d: but actually , andreas may maybe just something simpler but along the lines of what you 're saying , professor d: i was just realizing , i used to know this guy who used to build , mike mixers automatic mike mixers where , t in order to able to turn up the gain , as much as you can , you lower the gain on the mikes of people who are n't talking , professor d: and then he had some reasonable way of doing that , but , what if you were just looking at very simple measures like energy measures but you do n't just compare it to some threshold overall but you compare it to the energy in the other microphones . grad h: i was thinking about doing that originally to find out who 's the loudest , and that person is certainly talking . but i also wanted to find threshold , excuse me , mol overlap . so , not just the loudest . phd e: but , i . i have found that when i analyzed the speech files from the , mike , from the close microphone , i found zones with a different level of energy . grad h: could you fill that out anyway ? just , put your name in . are y you want me to do it ? i 'll do it . phd e: including overlap zone . including . because , depend on the position of the microph of the each speaker to , to get more o or less energy i in the mixed sign in the signal . and then , if you consider energy to detect overlapping in , and you process the in the speech file from the mixed signals . the mixed signals , . it 's difficult , only to en with energy to consider that in that zone we have , overlapping zone , if you process only the energy of the , of each frame . professor d: , it 's probably harder , but what i was s nnn noting just when he when andreas raised that , was that there 's other information to be gained from looking of the microphones and you may not need to look at very sophisticated things , because if there 's if most of the overlaps , this does n't cover , say , three , but if most of the overlaps , say , are two , if the distribution looks like there 's a couple high ones and the rest of them are low , professor d: , what , there 's some information there about their distribution even with very simple measures . 
, i had an idea with while i was watching chuck nodding at a lot of these things , is that we can all wear little bells on our heads , so that then you 'd know that phd a: actually , i saw a woman at the bus stop the other day who , was talking on her cell phone speaking japanese , and was bowing . , profusely . phd b: it 's very difficult if you try while you 're trying , say , to convince somebody on the phone it 's difficult not to move your hands . not , if you watch people they 'll actually do these things . i still think we should try a meeting or two with the blindfolds , at least of this meeting that we have lots of recordings of , maybe for part of the meeting , we do n't have to do it the whole meeting . phd b: that could be fun . it 'll be too hard to make barriers , i was thinking because they have to go all the way postdoc f: actually also say i made barr barriers for so that the i was doing with collin wha which just used , this foam board . postdoc f: r really inexpensive . you can you can masking tape it together , these are , pretty l large partitions . phd b: but then we also have these mikes , is the other thing i was thinking , so we need a barrier that does n't disturb the sound , phd b: , it sounds weird but it 's cheap and , be interesting to have the camera going . postdoc f: we 're going to have to work on the , on the human subjects form . professor d: that 's the one that we videotape . , i wanna move this along . i did have this other agenda item which is , @ it 's a list which i sent to a couple folks , but i wanted to get broader input on it , so this is the things that we did in the last three months not everything we did but highlights that tell s some outside person , what were you actually working on . in no particular order , one , ten more hours of meeting r meetings recorded , something like that , from , three months ago . xml formats and other transcription aspects sorted out and sent to ibm . , pilot data put together and sent to ibm for transcription , next batch of recorded data put together on the cd - roms for shipment to ibm , professor d: but , that 's why i phrased it that way , ok . human subjects approval on campus , and release forms worked out so the meeting participants have a chance to request audio pixelization of selected parts of the spee their speech . audio pixelization software written and tested . preliminary analysis of overlaps in the pilot data we have transcribed , and exploratory analysis of long - distance inferences for topic coherence , that was i was was n't if those were the right way that was the right way to describe that because of that little exercise that you and lokendra did . professor d: i , i ' m probably saying this wrong , but what i said was exploratory analysis of long - distance inferences for topic coherence . professor d: something like that . so , i a lot of that was from , what you two were doing so i sent it to you , and , mail me , the corrections or suggestions for changing i do n't want to make this twice it 's length but , just i m improve it . is there anything anybody professor d: bunch of for s ok , maybe send me a sentence that 's a little thought through about that . grad h: so , ok , i 'll send you a sentence that does n't just say " a bunch of " ? professor d: range of things , . and , i threw in what you did with what jane did on in under the , preliminary analysis of overlaps . thilo , can you tell us about all the work you ' ve done on this project in the last , last three months ? 
phd c: i did n't get it . what is " audio pixelization " ? professor d: he did it , so why do n't you explain it quickly ? grad h: it 's just beeping out parts that you do n't want included in the meeting , so you can say things like , " this should probably not be on the record , but beep " . professor d: we spent a fair amount of time early on dealing with this issue . we realized people are speaking in an impromptu way and they might say something that would embarrass them or others later , and how do you get around that ? so in the consent form it says we will look at the transcripts later and if there 's something that you 're unhappy with it can be removed . professor d: but you do n't want to just excise it , because you have to be careful about how you excise it , keeping the timing right , so at the moment the idea we 're running with is putting the beep over it . grad h: you can either beep or it can be silence . i could n't decide which was the right way to do it . beep is good auditorily : if someone is listening to it , there 's no mistake that it 's been beeped out , but for software it 's probably better for it to be silence . phd a: no , no . as long as you keep using the same beep , people could make a model of that beep , and postdoc f: it 's very clear . then you do n't think it 's a long pause . phd a: it 's more obvious that there was something there than if there 's just silence . phd a: right . but if you just replaced it with silence , it 's not clear whether that 's really silence or postdoc f: one question . do you do it on all channels ? interesting . i like that . grad h: i have n't thrown away any of the meetings that i beeped . actually yours is the only one that i beeped , and then the darpa meeting . grad h: and the darpa meeting is excised completely , so it 's in a private directory . postdoc f: i have one point i want to make , which is that you 're preserving the time relations : you 're not doing scissor snips , you 're keeping the time duration of the deleted part . grad h: yes , since we wanna possibly synchronize these things as well . i should have done that . phd b: so if there 's an overlap , like , if i ' m saying something that 's bleepable and somebody else overlaps during it , do they also get bleeped too ?
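a minimal sketch of that scheme : overwrite a marked region with a tone of exactly the same duration , so the length of the file and any cross - channel synchronization are preserved . numpy is assumed , the 1 khz tone , the level , and the region times are arbitrary illustration choices , and writing zeros instead of the sine would give the silence variant :

    import numpy as np

    def bleep(samples, rate, start_sec, end_sec, freq=1000.0, level=0.3):
        # replace samples[start:end] with a sine tone of the same length,
        # so nothing is cut out and all later timestamps stay valid
        s, e = int(start_sec * rate), int(end_sec * rate)
        t = np.arange(e - s) / float(rate)
        out = samples.copy()
        out[s:e] = level * np.sin(2 * np.pi * freq * t)
        return out  # same number of samples as the input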
professor d: before we do the digits , i did also wanna remind people , do send me thoughts for an agenda . grad h: can you do it for them ? and , no actually , you ca n't . phd a: actually , that 's what you were giving us was another meeting and i was like , " ok ! " phd b: how long does it take , just briefly , to label the postdoc f: no . i have the script now , so it can work off the other thing , postdoc f: but , because once his algorithm is up and running then we can do it that way . grad h: if it works well enough . right now it 's not quite to the point where it works . postdoc f: appreciate that . what this discussion caused me to wanna do is subdivide these further . i ' m gon na take a look at the backchannels , how much we have i hope to have that for next time . grad h: my algorithm worked great actually on these , but when you wear it like that or with the lapel , or if you have it very far from your face , that 's when it starts failing . grad h: so it was just a comment on the software , not a comment on prescriptions on how you wear microphones . postdoc f: do you want us to put a mark on the bottom of these when they ' ve actually been read ? or the only one that was n't read is known , so we do n't do it . ###summary: the berkeley meeting recorder group focussed its discussion on overlapping speech segments. speaker fe008 presented raw counts and percentages for one transcribed meeting , revealing a large number of overlaps throughout the 40-plus-minute transcript. efforts by speakers fe008 and fe016 are in progress to categorize and subcategorize types of overlapping speech and evaluate the contribution of multiple speakers in an interaction to the amount and types of overlap observed. speaker me011 described his attempts to automatically identify speakers via the close-talking microphone channels using thresholding and filtering methods and an existing speaker-change detection algorithm. the group also tentatively discussed the erection of visual barriers during meeting recordings , and speaker me013 presented a list of work performed by bmr over the previous three months to be included in a forthcoming report to ibm. for future meetings , speaker me011 will generate a system for mapping speakers and their positions in the recording room. speaker fe008 will analyze backchannels for a subset of meeting data and give a report in the next meeting. for language and dialogue modelling , current methods of marking and segmenting overlap are abstracted from real time , as individual speaker turns are indicated sequentially. a large amount of data must be collected to address research questions concerning overlapping speech. for automatic speaker identification , thresholding and filtering methods are sensitive regarding the particular filter width and threshold selected. while such parameters can be finely tuned for one speaker to achieve good results , extending the same parameters to another speaker is problematic. the broad phone classifier of the speaker-change detector is performing poorly. the prospect of erecting visual barriers during meetings would require partitioning off each of the participants. also , barriers that do not affect the overall room acoustics would be required. efforts are in progress to mark where regions of speaker overlap occur in meeting transcripts and note the number of speakers involved. such information is currently being encoded within relatively loose time boundaries. a large number of overlapping speech regions were identified throughout one recorded meeting , wherein overlaps were found to occur in bursts , rather than being evenly distributed throughout the meeting. a cursory analysis was done on regions of overlap involving two speakers to determine whether speakers are more likely to be overlapped with or to cause overlap with other speakers. attempts were also made to classify types of speaker overlap ( e.g. backchannels , answering questions as they are being asked , and responding in unison ) , with future work focussed on subcategorizing types of backchannels. speaker fe016 is interested in the contribution of multiple speakers in an interaction to the amount and types of overlap observed , and comparing this to findings from the switchboard corpus. future work includes generating predictive models of overlap , and the tentative erection of visual barriers during meeting recordings.
grad g: so maybe what 's causing it to crash is i keep starting it and then stopping it to see if it 's working . and so starting it and then stopping it and starting it again causes it to crash . so , i wo n't do that anymore . postdoc b: and it looks like you ' ve found a way of mapping the location to the without having people have to give their names each time ? grad g: so they should be right with what 's on the digit forms . ok , so i 'll go ahead and start with digits . u and i should say that , you just pau you just read each line an and then pause briefly . professor e: so , you see , don , the unbridled excitement of the work that we have on this project . grad g: but that would be a good thing to add . after printed out a zillion of them . professor e: , that 's , so i do have a an agenda suggestion . , we the things that we talk about in this meeting tend to be a mixture of procedural mundane things and research points and i was thinking it was a meeting a couple of weeks ago that we spent much of the time talking about the mundane cuz that 's easier to get out of the way and then we drifted into the research and maybe five minutes into that andreas had to leave . so i ' m suggesting we turn it around and we have anybody has some mundane points that we could send an email later , hold them for a bit , and let 's talk about the research - y things . , so the one th one thing i know that we have on that is we had talked a couple weeks before about the you were doing with l attempting to locate events , we had a little go around trying to figure out what you meant by " events " but , what we had meant by " events " i was points of overlap between speakers . but i th i gather from our discussion a little earlier today that you also mean interruptions with something else like some other noise . professor e: and then the other thing would be it might be to have a preliminary discussion of some of the other research areas that we 're thinking about doing . , especially since you have n't been in these meetings for a little bit , maybe you have some discussion of some of the p the plausible things to look at now that we 're starting to get data , and one of the things i know that also came up is some discussions that jane had with lokendra about some work about i d i do n't want to try to say cuz i 'll say it wrong , but anyway some potential collaboration there about the working with these data . so , . professor e: . , i if we if this is like everybody has something to contribute thing , there 's just a couple people primarily but , wh why do n't actually that last one said we could do fairly quickly so why do n't you start with that . postdoc b: , so , he was interested in the question of , relating to his to the research he presented recently , of inference structures , and , the need to build in , this mechanism for understanding of language . and he gave the example in his talk about how , e a i ' m remembering it just off the top of my head right now , but it 's something about how , i " joe slipped " , " john had washed the floor " like that . and i do n't have it quite right , but that thing , where you have to draw the inference that , ok , there 's this time sequence , but also the causal aspects of the floor and how it might have been the of the fall and that it was the other person who fell than the one who cleaned it and it these sorts of things . 
so , i looked through the transcript that we have so far , and , fou identified a couple different types of things of that type and , one of them was something like , during the course of the transcript , w we had gone through the part where everyone said which channel they were on and which device they were on , and , the question was raised " , should we restart the recording at this point ? " and and dan ellis said , " , we 're just so far ahead of the game right now we really do n't need to " . now , how would you interpret that without a lot of inference ? so , the inferences that are involved are things like , ok , so , how do you interpret " ahead of the game " ? . so it 's the it 's i what you what you int what you draw , the conclusions that you need to draw are that space is involved in recording , postdoc b: that , i that i we have enough space , and he continues , like " we 're so ahead of the game cuz now we have built - in downsampling " . so you have to get the idea that , " ahead of the game " is sp speaking with respect to space limitations , that downsampling is gaining us enough space , and that therefore we can keep the recording we ' ve done so far . but there are a lot of different things like that . grad g: so , do you think his interest is in using this as a data source , or training material , or what ? professor e: , i should maybe interject to say this started off with a discussion that i had with him , so we were trying to think of ways that his interests could interact with ours professor e: and that if we were going to project into the future when we had a lot of data , and such things might be useful for that in or before we invested too much effort into that he should , with jane 's help , look into some of the data that we 're already have and see , is there anything to this ? is there any point which you think that , you could gain some advantage and some potential use for it . cuz it could be that you 'd look through it and you say " , this is just the wrong task for him to pursue his " professor e: and and i got the impression from your mail that there was enough things like this just in the little sample that you looked at that it 's plausible at least . postdoc b: it 's possible . , he was he we met and he was gon na go and , y look through them more systematically and then meet again . so it 's , not a matter of a but , i think it was optimistic . professor e: so anyway , that 's e a quite different thing from anything we ' ve talked about that , might come out from some of this . postdoc b: that 's his major i mentioned several that w had to do with implications drawn from intonational contours postdoc b: and that was n't as directly relevant to what he 's doing . he 's interested in these knowledge structures , professor e: , he certainly could use text , but we were looking to see if there is there something in common between our interest in meetings and his interest in this . grad g: and i imagine that transcripts of speech text that is speech probably has more of those than prepared writing . i whether it would or not , but it seems like it would . postdoc b: , i do n't think i would make that leap , because i in narratives , if you spell out everything in a narrative , it can be really tedious , so . grad g: , i ' m just thinking , when you 're face to face , you have a lot of backchannel and and grad g: and so it 's just easier to do that broad inference jumping if it 's face to face . 
, so , if read that dan was saying " we 're ahead of the game " in that context , i might not realize that he was talking about disk space as opposed to anything else . postdoc b: i , i had several that had to do with backchannels and this was n't one of them . this this one really does m make you leap from so he said , " we 're ahead of the game , w we have built - in downsampling " . and the inference , i if you had it written down , would be postdoc b: - . but there are others that have backchannelling , it 's just he was less interested in those . phd f: can i to interrupt . , i f i ' ve @ d a minute , several minutes ago , i , like , briefly was not listening and so who is " he " in this context ? phd f: ok . so i was just realizing we ' ve you guys have been talking about " he " for at least , i , three four minutes without ever mentioning the person 's name again . phd c: i believe it . actually to make it worse , morgan uses " you " and " you " phd f: so this is gon na be a big , big problem if you want to later do , indexing , or speech understanding of any sort . phd c: with gaze and no identification , or wrote this down . , actually . cuz morgan will say , " you had some ideas " phd c: because , because he 's looking at the per even for addressees in the conversation , phd c: i bet you could pick that up in the acoustics . just because your gaze is also correlated with the directionality of your voice . phd c: , so that , to even know when , if you have the p z ms you should be able to pick up what a person is looking at from their voice . grad g: , especially with morgan , with the way we have the microphones arranged . i ' m right on axis and it would be very hard to tell . grad g: , but if i ' m talking like this ? right now i ' m looking at jane and talking , now i ' m looking at chuck and talking , i do n't think the microphones would pick up that difference . grad g: so if i ' m talking at you , or i ' m talking at you . professor e: i probably been affect no , i th i ' ve been affected by too many conversations where we were talking about lawyers and talking about and concerns about " gee is somebody going to say something bad ? " and so on . professor e: and so i ' m i ' m tending to stay away from people 's names even though phd c: even though you could pick up later on , just from the acoustics who you were t who you were looking at . professor e: no no , it is n't sensitive . i was just i was overreacting just because we ' ve been talking about it . postdoc b: i came up with something from the human subjects people that i wanted to mention . , it fits into the m area of the mundane , but they did say , i asked her very specifically about this clause of how , , it says " no individuals will be identified , " in any publication using the data . " ok , individuals being identified , let 's say you have a snippet that says , " joe s thinks such - and - such about this field , but he 's wrongheaded . " now , we 're gon na be careful not to have the " wrongheaded " part in there , but , let 's say we say , " joe used to - and - so about this area , in his publication he says that but he 's changed his mind . " or whatever . then the issue of being able to trace joe , because we know he 's - known in this field , and all this and tie it to the speaker , whose name was just mentioned a moment ago , can be sensitive . 
postdoc b: so it 's really adaptive and wise to not mention names any more than we have to , because if there 's a slanderous aspect to it , then how much do we wanna have to be able to remove ? professor e: there 's that . but also to some extent it 's just educating the human subjects people , in a way , because there 's court transcripts , there 's transcripts of radio shows , people say people 's names all the time . so it ca n't be bad to say people 's names . it 's just that you 're right that there 's more potential if we never say anybody 's name , then there 's no chance of slandering anybody , phd c: we should do whatever 's natural in a meeting if we were n't being recorded . postdoc b: my feeling on it was that it was n't really important who said it , phd f: since you have to go over the transcripts later anyway , you could make it one of the jobs of the people who do that to mark grad g: we talked about this during the anonymization , whether we wanna go through and extract from the audio and the written record every time someone says a name . and our conclusion was that we did n't want to do that . professor e: we really ca n't . but actually , i really would like to push to finish this off . postdoc b: i understand . i just was suggesting that it 's not a bad policy , potentially . professor e: i did n't intend it as a policy , though . it was just unconscious , semi - conscious behavior . i sorta knew i was doing it but it was phd c: no , you have to say , you still know who " he " is , with that prosody . professor e: we were talking about dan at one point and we were talking about lokendra at another point . professor e: you could do all these inferences . i would like to move on to what jose has been doing , because he 's actually been doing something . phd d: ok . i will remind you that my first objective in the project is to study different parameters to find a good solution to detect the overlapping zones in recorded speech . in that way , i began to study and analyze the recorded speech from the different sessions , to find , locate , and mark the different overlapping zones . so i am transcribing the first session , and i have found one thousand acoustic events besides the overlapping zones : breaths , aspirations , talk , claps what are the different names you use for the non - speech sounds ? grad g: i do n't think we ' ve been doing it at that level of detail . so . phd d: i do n't need to label the different acoustic events , but i would prefer to , because if i find good parameters to detect overlapping , i would like to test these parameters against the other acoustic events , to find what false hypotheses are produced when we use these parameters : pitch , difference features phd a: some of these that are the nonspeech overlapping events may be difficult even for humans to tell that there 's two there . phd a: if it 's a tapping sound or something like that , it might be hard to know that it was two separate events . phd d: but my objective will be to study the overlapping zones . in twelve minutes i found one thousand acoustic events . professor e: how many overlaps were there in it ? no no , how many of them were overlaps of speech , though ? postdoc b: one question .
so if you had an overlap involving three people , how many times was that counted ? phd d: three people , two people . i would like to consider one person with different noise in the background professor e: no no , but what she 's asking is , if for some particular stretch you had three people talking instead of two , did you call that one event ? phd d: i consider one event for all of that zone . i consider an acoustic event , the overlapping zone , the period where two or three speakers are talking together . grad g: so let 's say me and jane are talking at the same time , and then liz starts talking also , over all of us . how many events would that be ? phd d: no . no no . for me it is one overlapping zone , because you have more than one voice produced at a moment . professor e: so then , since there is some continuous region in between regions where there is only one person speaking , one contiguous region like that you 're calling an event . are you calling the beginning or the end of it the event , or are you calling the entire length of it the event ? phd d: i consider the entirety , all the time where the voices have overlapped . this is the idea . but i do n't distinguish between the numbers of speakers . i ' m not considering the fact that , what did you say ? at first two talkers are speaking , and a third person joins them . for me , the whole overlap zone , with whatever number of speakers , is the same acoustic event , without any mark between the zone of the overlap with two speakers speaking together and the zone with three speakers . phd d: it is one , with a beginning mark and an ending mark , because for me it is a zone with some spectral distortion . grad g: but you could imagine that three people talking has a different spectral characteristic than two . phd d: i do n't know , but i have to study what will happen , in a general way . professor e: so again , that 's three hundred in forty - five minutes that are speakers , just speakers . postdoc b: but a thousand taps in twelve minutes is a lot . phd d: there is silence and background noise to detect too , because i consider as acoustic events all the things that are not speech , from a general point of view . phd d: i have n't yet , but i would like to do a statistical study and give you the report from the study of the first session . and i found another thing . when i was looking at the different speech files , if we use the mixed file to transcribe the events and the words , i saw that the speech signal collected by this mike is different from the mixed signal collected by the headphones . and that 's right . phd d: but the problem is the following . i knew that the signals would be different , but the problem is we detect different events in the speech file collected by that mike compared with the mixed file . and so , if you transcribe only using the mixed file , and you then use the transcription to evaluate a system , and you use the speech file collected by the fet mike to do the experiments with that system , it is possible to consider acoustic events that you marked in the mixed file but that do n't appear in the speech signal collected by that mike .
grad g: the reason that i generated the mixed file was for ibm to do word - level transcription , not speech event transcription . grad g: so i agree that if someone wants to do speech event transcription , the mixed signals are n't right : if i ' m tapping on the table , it 's not gon na show up on any of the close mikes , but it 's gon na show up rather loudly in the pzm . phd d: and i say this only because , in my opinion , it 's necessary to put the transcription on the speech file collected by the objective signal , the signal collected by the real mike of the future prototype , to correct the initial segmentation with the real speech professor e: just in that one ten - second , or whatever it was , example that adam had that we passed on to others a few months ago , there was that business where , i guess , adam and jane were talking at the same time , and in the close - talking mikes you could n't hear the overlap , and in the distant mike you could . so it 's clear that if you wanna find all the places where there were overlap , it 's probably better to use a distant mike . professor e: on the other hand , there 's other phenomena that are going on at the same time for which it might be useful to look at the close - talking mikes , phd c: but why ca n't you use the combination of the close - talking mikes , time - aligned ? grad g: if you use the combination of the close - talking mikes , you would hear jane interrupting me , but you would n't hear the paper rustling . and so if you 're interested in phd c: if you 're interested in speakers overlapping other speakers and not the other kinds of nonspeech , that 's not a problem , grad g: although the other issue is that for the mixed close - talking mikes i ' m doing weird normalizations and things like that . phd c: but it 's known . the normalization you do is over the whole conversation , is n't it , over the whole meeting . so if you wanted to study people overlapping people , that 's not a problem . phd d: i saw it , but i do n't have any results yet . i saw the speech file collected by the fet mike , and the signal - to - noise ratio is low . it 's very low , if we compare it with the headphone . and i found that , probably , phd d: i ' m not sure at the moment , but it 's probable that in a lot of the overlapping zones , and in several parts of the files where you can find smooth speech from one talker in the meeting , you cannot process the speech because it 's confused with noise . there are a lot of such zones , but i have to study it in more detail . but my idea is to process only this kind of speech , because it 's more realistic . i ' m not sure it 's a good idea , but professor e: it 'd be hard , but on the other hand , as you point out , if your concern is to get the overlapping people 's speech , you will get that somewhat better . are you making any use you were working with the data that had already been transcribed . professor e: yes . now did you make any use of that ? see , i was wondering cuz we have these ten hours of other data that is not yet transcribed . do you phd d: the transcription by jane i want to use as a reference .
but i ' m not interested in the transcribed words as they follow in the speech file . jane put a mark at the beginning of each talker in the meeting , and she includes information about the zones where there is overlap , but there is n't any temporal mark to label the beginning and the end of the professor e: so the twelve this included maybe some time where you were learning about what you wanted to do , but it took you something like twelve hours to mark the forty - five minutes , your phd d: twelve hours of work to segment and label twelve minutes from part of a session . professor e: so let me back up again . when you said there were three hundred speaker overlaps , that 's in twelve minutes ? phd d: no no . i consider all the session , because i counted the overlaps marked by jane , professor e: so it 's three hundred in forty - five minutes , but you have time - marked the overlaps in twelve minutes of it . phd f: so , can i ask whether you found how accurate jane 's labels were , as far as phd d: for the moment , i have n't compared my temporal marks with jane 's , but i want to do it . because perhaps i have errors in my marks , and if i compare with jane 's , i can probably correct them and get a more accurate transcription in the file . grad g: also , jane was doing word level , so we were n't concerned with exactly when an overlap started and stopped . phd c: you did n't need to show the exact point of interruption , you just were showing it at the level of the phrase or the level of the speech spurt , or postdoc b: i would say time bin . so my goal was to get words with reference to a time bin , a beginning and end point . and sometimes you could have an overlap where someone said something in the middle , but it just was n't important for our purposes to disrupt that unit in order to have the words in the order in which they were spoken , and it would have been hard with the interface that we have . now , adam 's working on a revised overlapping interface , professor e: but i have a suggestion about that . this is very , very time - consuming , and you 're finding lots of things which i ' m sure are gon na be very interesting , but in the interests of making progress , how would it affect your time if you only marked speaker overlaps ? professor e: do not mark any other events , but only mark speaker overlaps do you think that would speed it up quite a bit ? professor e: do you think that would speed it up ? speed up your marking ? professor e: and my question is , if you did that , if you followed my suggestion , would it take much less time ? then it 's a good idea , because it phd d: yes , because i need a lot of time to put the labels in . professor e: there 's continual noise from fans , there is more impulsive noise from taps , and something in between with paper rustling . we know that all that 's there and it 's a worthwhile thing to study , but it takes a lot of time to mark all of these things . whereas i would think that we can study the overlapping of people talking more or less as a distinct phenomenon . so .
then you can get the cuz you need if it 's three hundred i it sounds like you probably only have fifty or sixty or seventy events right now that are really and and you need to have a lot more than that to have any even visual sense of what 's going on , much less any reasonable statistics . phd c: now , why do you need to mark speaker overlap by hand if you can infer it from the relative energy in the professor e: , ok , . so let 's back up because you were n't here for an earlier conversation . professor e: so the idea was that what he was going to be doing was experimenting with different measures such as the increase in energy , such as the energy in the lpc residuals , such as there 's a bunch of things , increased energy is - is an obvious one . professor e: , and , it 's not obvious , you could do the dumbest thing and get it ninety percent of the time . but when you start going past that and trying to do better , it 's not obvious what combination of features is gon na give you the , the right detector . so the idea is to have some ground truth first . and so the i the idea of the manual marking was to say " ok this , i , it 's really here " . professor e: we we talked about that . s but so it 's a bootstrapping thing and , the idea was , i we thought it would be useful for him to look at the data anyway , and then whatever he could mark would be helpful , and we could it 's a question of what you bootstrap from . , do you bootstrap from a simple measurement which is right most of the time and then you g do better , or do you bootstrap from some human being looking at it and then do your simple measurements , from the close - talking mike . , even with the close - talking mike you 're not gon na get it right all the time . phd c: , that 's what i wonder , because or how bad it is , be , because that would be interesting phd c: especially because the bottleneck is the transcription . right ? , we ' ve got a lot more data than we have transcriptions for . we have the audio data , we have the close - talking mike , so it seems like one project that 's not perfect , but , that you can get the training data for pretty quickly is , if you infer form the close - talking mikes where the on - off points are of speech , grad g: and so but it 's doing something very , very simple . it just takes a threshold , based on the volume , phd f: or you can set the threshold low and then weed out the false alarms by hand . grad g: , and then it does a median filter , and then it looks for runs . and , it seems to work , i ' ve i ' m fiddling with the parameters , to get it to actually generate something , and i have n't i do n't what i ' m working on was getting it to a form where we can import it into the user interface that we have , into transcriber . and so i told i said it would take about a day . i ' ve worked on it for about half a day , grad g: so give me another half day and i we 'll have something we can play with . professor e: see , this is where we really need the meeting recorder query to be working , because we ' ve had these meetings and we ' ve had this discussion about this , and i ' m remembering a little bit about what we decided , but i could n't remember all of it . professor e: so , it was partly that , give somebody a chance to actually look at the data and see what these are like , partly that we have e some ground truth to compare against , when he gets his thing going , phd c: , it 's definitely good to have somebody look at it . 
i was just thinking as a way to speed up , the amount of postdoc b: was that there m there was this already a script i believe that dan had written , that handle bleedthrough , cuz you have this close you have contamination from other people who speak loudly . grad g: and i have n't tried using that . it would probably help the program that i ' m doing to first feed it through that . it 's a cross - correlation filter . so i have n't tried that , but that if it it might be something it might be a good way of cleaning it up a little . postdoc b: so , some thought of maybe having , having that be a preprocessor and then run it through yours . professor e: and we wanna see try the simple thing first , cuz you add this complex thing up afterwards that does something good y yo you wanna see what the simple thing does first . but , having somebody have some experience , again , with marking it from a human standpoint , we 're , i do n't expect jose to do it for f fifty hours of speech , but we if he could speed up what he was doing by just getting the speaker overlaps so that we had it , say , for forty - five minutes , then at least we 'd have three hundred examples of it . professor e: and when adam was doing his automatic thing he could then compare to that and see what it was different . phd a: , i did something almost identical to this at one of my previous jobs , and it works pretty . , i almost exactly what you described , an energy detector with a median filter , you look for runs . and , you can phd a: and so doing that to generate these possibilities and then going through and saying yes or no on them would be a quick way to do it . postdoc b: , is this something that we could just co - opt , or is it ? postdoc b: right . thought if it was tried and true , then and he 's gone through additional levels of development . grad g: just output . although if you have some parameters like what 's a good window size for the median filter phd a: ! i have to remember . i 'll think about it , and try to remember . professor e: , i , it if we want to so , maybe we should move on to other things in limited time . postdoc b: can i ask one question about his statistics ? so so in the tw twelve minutes , if we took three hundred and divided it by four , which is about the length of twelve minutes , i , i 'd expect like there should be seventy - five overlaps . did you find more than seventy - five overlaps in that period , or ? phd d: , not @ i onl - only i transcribe only twelve minutes from the but i do n't co i do n't count the overlap . phd d: i consider i the the nnn the the three hundred is considered only you your transcription . i have to finish transcribing . so . grad g: i b i bet they 're more , because the beginning of the meeting had a lot more overlaps than the middle . middle or end . grad g: because i we 're dealing with the , in the early meetings , we 're recording while we 're saying who 's talking on what microphone , and things like that , and that seems to be a lot of overlap . postdoc b: it 's an empirical question . we could find that out . i ' m not that the beginning had more . professor e: so so i was gon na ask , i about any other things that either of you wanted to talk about , especially since andreas is leaving in five minutes , that you wanna go with . 
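stepping back to the cross - correlation preprocessor mentioned above , it could look something like the sketch below : flag frames on a close - talking channel whose content correlates strongly with a delayed , scaled copy of another channel , and treat them as bleedthrough rather than the wearer 's own speech . this is a guess at the general approach , not dan 's actual script ; numpy is assumed and the lag range and threshold are made up :

    import numpy as np

    def is_bleedthrough(frame_self, frame_other, max_lag=80, corr_thresh=0.7):
        # peak normalized cross-correlation over a small range of lags;
        # a high peak means this frame is mostly another mike's signal
        a = frame_self - frame_self.mean()
        b = frame_other - frame_other.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                c = (a[lag:] * b[:len(b) - lag]).sum()
            else:
                c = (a[:lag] * b[-lag:]).sum()
            best = max(best, abs(c) / denom)
        return best > corr_thresh

run as a preprocessor , frames flagged this way would simply be zeroed or skipped before the energy detector sees them .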
phd c: can ask about the data , like very straightforward question is where we are on the amount of data and the amount of transcribed data , just cuz i ' m i wanted to get a feel for that to be able to can be done first and like how many meetings are we recording professor e: right so there 's this there 's this forty - five minute piece that jane transcribed . that piece was then sent to ibm so they could transcribe so we have some comparison point . then there 's a larger piece that 's been recorded and put on cd - rom and sent to ibm . right ? and then we . grad g: , i have n't sent them yet because i was having this problem with the missing files . phd c: - . and we 're recording only this meeting , like continuously we 're only recording this one now ? or ? professor e: no . no , so the that 's the biggest one , chunk so far , professor e: , they do . w and we talked to them about recording some more and we 're going to , we ' ve started having a morning meeting , today i starting a w a week or two ago , on the front - end issues , and we 're recording those , there 's a network services and applications group here who 's to have their meetings recorded , professor e: and we 're gon na start recording them . they 're they meet on tuesdays . we 're gon na start recording them next week . so actually , we 're gon na h start having a pretty significant chunk and so , adam 's struggling with trying to get things to be less buggy , and come up quicker when they do crash and things like that , now that the things are starting to happen . so right now , i th i 'd say the data is predominantly meeting meetings , but there are scattered other meetings in it and that amount is gon na grow so that the meeting meetings will probably ultimately i if we 're if we collect fifty or sixty hours , the meeting meetings it will probably be , twenty or thirty percent of it , not eighty or ninety . professor e: and th the other thing is i ' m not pos i ' m thinking as we ' ve been through this a few times , that i really maybe you wanna do it once for the novelty , but i if in general we wanna have meetings that we record from outside this group do the digits . because it 's just an added bunch of weird . and , we h we 're highly motivated . , the morning group is really motivated cuz they 're working on connected digits , grad g: actually that 's something i wanted to ask , is i have a bunch of scripts to help with the transcription of the digits . we do n't have to hand - transcribe the digits because we 're reading them and i have those . and so i have some scripts that let you very quickly extract the sections of each utterance . but i have n't been ru i have n't been doing that . , if i did that , is someone gon na be working on it ? professor e: , whoever we have working on the acoustics for the meeting recorder are gon na start with that . grad g: ok . , i ' m interested in it , do n't have time to do it now . phd f: i was these meetings i ' m someone thought of this , but these this reading of the numbers would be extremely helpful to do adaptation . grad g: i would really like someone to do adaptation . so if we got someone interested in that , it would be great for meeting recorder . professor e: , one of the things i wanted to do , that i talked to don about , is one of the possible things he could do or m also , we could have someone else do it , is to do block echo cancellation , professor e: to try to get rid of some of the effects of the far - field effects . 
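stepping back to the digit scripts grad g mentions above : since the read strings are known from the forms , no hand transcription is needed , and extraction is just cutting segments at recorded time marks . a sketch , with a hypothetical segment list and file naming , not the actual scripts .

```python
# Sketch of digit-utterance extraction: given (start, end, label) time
# marks for each read digit string, cut the segments out of a channel's
# wav file. The file layout and naming scheme are hypothetical.
from scipy.io import wavfile

def extract_segments(wav_path, segments, out_prefix):
    """segments: list of (start_sec, end_sec, label) tuples."""
    rate, x = wavfile.read(wav_path)
    for k, (t0, t1, label) in enumerate(segments):
        piece = x[int(t0 * rate): int(t1 * rate)]
        wavfile.write(f"{out_prefix}_{k:03d}_{label}.wav", rate, piece)
```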
, we have the party line has been that echo cancellation is not the right way to handle the situation because people move around , and , if it 's not a simple echo , like a cross - talk echo , but it 's actually room acoustics , it 's you ca n't really do inversion , and even echo cancellation is going to be something it may you someone may be moving enough that you are not able to adapt quickly and so the tack that we ' ve taken is more " let 's come up with feature approaches and multi - stream approaches and , that will be robust to it for the recognizer and not try to create a clean signal " . , that 's the party line . but it occurred to me a few months ago that party lines are always , dangerous . it 's good to test them , actually . and so we have n't had anybody try to do a good serious job on echo cancellation and we should know how well that can do . so that 's something i 'd like somebody to do at some point , just take these digits , take the far - field mike signal , and the close mike signal , and apply really good echo cancellation . , there was a have been some talks recently by lucent on their b the block echo cancellation particularly appealed to me , not trying to change it sample by sample , but you have some reasonable sized blocks . and , th phd a: w what is the artifact you try to you 're trying to get rid of when you do that ? professor e: so it 's it you have a direct , what 's the difference in if you were trying to construct a linear filter , that would professor e: that would subtract off the parts of the signal that were the aspects of the signal that were different between the close - talk and the distant . , so i in most echo cancellation , so you given that , so you 're trying to so you 'd there 's a distance between the close and the distant mikes so there 's a time delay there , and after the time delay , there 's these various reflections . and if you figure out what 's the there 's a least squares algorithm that adjusts itself adjusts the weight so that you try to subtract essentially to subtract off different reflections . right ? so let 's take the simple case where you just had you had some delay in a satellite connection and then there 's a there 's an echo . it comes back . and you want to adjust this filter so that it will maximally reduce the effect of this echo . phd a: so that would mean like if you were listening to the data that was recorded on one of those . , just the raw data , you would you might hear an echo ? and and then this noise cancellation would get professor e: , i ' m saying that 's a simplified version of what 's really happening . what 's really happening is , when i ' m talking to you right now , you 're getting the direct sound from my speech , but you 're also getting , the indirect sound that 's bounced around the room a number of times . ok ? so now , if you try to r you to completely remove the effect of that is impractical for a number of technical reasons , but i but not to try to completely remove it , that is , invert the room response , but just to try to eliminate some of the effect of some of the echoes . , a number of people have done this so that , say , if you 're talking to a speakerphone , it makes it more like it would be , if you were talking right up to it . so this is the st the straight - forward approach . you say i want to use this item but i want to subtract off various kinds of echoes . so you construct a filter , and you have this filtered version of the speech gets subtracted off from the original speech .
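the least squares algorithm described above is classically implemented as an lms / nlms adaptive filter : adjust the tap weights so the filtered reference explains as much of the observed signal as possible , and keep the residual , which is what minimizing the energy of e = d - w·x means . a generic sketch ; the tap count and step size are hypothetical , the mapping of close and far channels onto reference and observed is left to the experimenter , and this is not the lucent block algorithm itself .

```python
# Generic NLMS adaptive filter, the usual implementation of the
# least-squares echo canceller sketched above: adapt the tap weights w
# so the filtered reference x explains as much of d as possible, and
# keep the residual e = d - w.x.
import numpy as np

def nlms(x, d, taps=256, mu=0.1, eps=1e-8):
    """x: reference signal, d: observed signal (same length).
    Returns the residual after cancelling the filtered reference."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps, len(d)):
        xb = x[n - taps:n][::-1]                   # most recent sample first
        y = w @ xb                                 # filter output (echo estimate)
        e[n] = d[n] - y                            # residual, echo subtracted off
        w += (mu / (xb @ xb + eps)) * e[n] * xb    # normalized LMS update
    return e
```

a block variant would accumulate updates over a buffer of samples instead of updating every sample , which is the appeal of the block approach mentioned above .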
then you try to minimize the energy in some sense . and so with some constraints . professor e: so , echo cancelling is , commonly done in telephony , and it 's the obvious thing to do in this situation if you if , you 're gon na be talking some distance from a mike . phd a: when , i would have meetings with the folks in cambridge when i was at bbn over the phone , they had a some a special speaker phone and when they would first connect me , it would come on and we 'd hear all this noise . and then it was and then it would come on and it was very clear , professor e: right . so it 's taking samples , it 's doing adaptation , it 's adjusting weights , and then it 's getting the sum . so , anyway that 's a reasonable thing that i 'd like to have somebody try somebody look and and the digits would be a reasonable thing to do that with . that 'd be enough data plenty of data to do that with , and i for that task you would n't care whether it was large vocabulary speech or anything . postdoc b: is brian kingsbury 's work related to that , or is it a different type of reverberation ? professor e: brian kingsbury 's work is an example of what we did f from the opposite dogma . which is what i was calling the " party line " , which is that doing that thing is not really what we want . we want something more flexible , i where people might change their position , and there might be , there 's also noise . so the echo cancellation does not really allow for noise . it 's if you have a clean situation but you just have some delays , then we 'll figure out the right set of weights for your taps for your filter in order to produce the effect of those echoes . but if there 's noise , then the very signal that it 's looking at is corrupted so that its decision about what the right delays are , what the right delayed signal is , is incorrect . and so , in a noisy situation , also in a situation that 's very reverberant with long reverberation times and really long delays , it 's typically impractical . so for those reasons , and also a c a complete inversion , if you actually i mentioned that it 's hard to really do the inversion of the room acoustics . , that 's difficult because often times the system transfer function is such that when it 's inverted you get something that 's unstable , and so , if you do your estimate of what the system is , and then you try to invert it , you get a filter that actually , rings , and goes to infinity . so it 's so there 's that technical reason , and the fact that things move , and there 's air currents there 's all sorts of reasons why it 's not really practical . so for all those kinds of reasons , we , concluded we did n't want to in do inversion , and we 're even pretty skeptical of echo cancellation , which is n't really inversion , and we decided to do this approach of taking , just picking features , which were will give you more something that was more stable , in the presence of , or absence of , room reverberation , and that 's what brian was trying to do . so , let me just say a couple things that i was gon na bring up . let 's see . i you actually already said this thing about the consent forms , which was that we now do n't have to so this was the human subjects folks who said this , or that ? postdoc b: the a , we 're gon na do a revised form , . but once a person has signed it once , then that 's valid for a certain number of meetings . she wanted me to actually estimate how many meetings and put that on the consent form .
i told her that would be a little bit difficult to say . so from a s practical standpoint , maybe we could have them do it once every ten meetings , . it wo n't be that many people who do it that often , but just , so long as they do n't forget that they ' ve done it , i . professor e: , back on the data thing , so there 's this one hour , ten hour , a hundred hour thing that we have . we have we have an hour that is transcribed , we have twelve hours that 's recorded but not transcribed , and at the rate we 're going , by the end of the semester we 'll have , i , forty or fifty , if we if this really , do we have that much ? professor e: eight weeks times three hours is twenty - four , so that 's , so like thirty hours ? phd c: , is there i know this sounds tough but we ' ve got the room set up . i was starting to think of some projects where you would use , similar to what we talked about with energy detection on the close - talking mikes . there are a number of interesting questions that you can ask about how interactions happen in a meeting , that do n't require any transcription . so what are the patterns , the energy patterns over the meeting ? and i ' m really interested in this but we do n't have a whole lot of data . so i was thinking , we ' ve got the room set up and you can always think of , also for political reasons , if icsi collected , two hundred hours , that looks different than forty hours , even if we do n't transcribe it ourselves , professor e: but i do n't think we 're gon na stop at the end of this semester . professor e: so , i th that if we are able to keep that up for a few months , we are gon na have more like a hundred hours . phd c: , is there are there any other meetings here that we can record , especially meetings that have some conflict in them or some deci , that are less i do n't , that have some more emotional aspects to them , or strong phd c: there 's laughter , i ' m talking more about strong differences of opinion meetings , maybe with manager types , or grad g: , that 's a good idea . that 's that would be a good match . professor e: . so i , i 'd mentioned to adam , and that was another thing i was gon na talk , mention to them before that there 's it it oc it occurred to me that we might be able to get some additional data by talking to acquaintances in local broadcast media . because , we had talked before about the problem about using found data , that it 's just set up however they have it set up and we do n't have any say about it and it 's typically one microphone , in a , or and so it does n't really give us the characteristics we want . and so i do think we 're gon na continue recording here and record what we can . but , it did occur to me that we could go to friends in broadcast media and say " hey you have this panel show , or this , this discussion show , and can you record multi - channel ? " and they may be willing to record it with professor e: , they probably already use lapel , but they might be able to have it would n't be that weird for them to have another mike that was somewhat distant . it would n't be exactly this setup , but it would be that thing , and what we were gon na get from uw , assuming they start recording , is n't als also is not going to be this exact setup . phd c: right . no , that 'd be great , if we can get more data . professor e: so , i was thinking of looking into that . 
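the transcription - free " energy patterns " projects phd c floats above can start from the same activity masks as the overlap work : per - channel speaking time and turn counts need no words at all . a sketch , building on the hypothetical frame - level masks from earlier .

```python
# Sketch of a transcription-free "interaction pattern" measure of the
# kind floated above: per-channel speaking time and turn counts computed
# from the close-talking activity masks, no transcripts required.
import numpy as np

def interaction_stats(channel_masks, hop_sec):
    stats = []
    for ch, mask in enumerate(np.asarray(channel_masks)):
        changes = np.diff(np.concatenate(([0], mask, [0])))
        turns = int((changes == 1).sum())       # number of speech runs
        stats.append({
            "channel": ch,
            "speaking_sec": float(mask.sum() * hop_sec),
            "turns": turns,
        })
    return stats
```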
the other thing that occurred to me after we had that discussion , is that it 's even possible , since , many radio shows are not live , that we could invite them to have like some of their record some of their shows here . phd c: or , they 're not as averse to wearing one of these head - mount , they 're on the radio , phd c: so . , that 'd be fantastic cuz those kinds of panels and those have interesting th - that 's an a side of style a style that we 're not collecting here , it 'd be great . professor e: and and the , the other side to it was the what which is where we were coming from i 'll talk to you more about it later is that there 's the radio stations and television stations already have worked out presumably , related to , legal issues and permissions and all that . , they already do what they do whatever they do . so it 's , it 's so it 's so it 's another source . so it 's something we should look into , we 'll collect what we collect here hopefully they will collect more at uw also and maybe we have this other source . but that it 's not unreasonable to aim at getting , significantly in excess of a hundred hours . , that was our goal . the thing was , i was hoping that we could in the under this controlled situation we could at least collect , thirty to fifty hours . and at the rate we 're going we 'll get pretty close to that this semester . and if we continue to collect some next semester , we should , phd c: i was mostly trying to think , " ok , if you start a project , within say a month , how much data do you have to work with ? " and you wanna s you wanna fr freeze your data for a while so right now and we do n't have the transcripts back yet from ibm do , do we now ? professor e: , we do n't even have it for this f , forty - five minutes , that was phd c: so , not complaining , i was just trying to think , what kinds of projects can you do now versus six months from now and they 're pretty different , because professor e: . so i was thinking right now it 's this exploratory where you look at the data , you use some primitive measures and get a feeling for what the scatter plots look like , professor e: and and meanwhile we collect , and it 's more like , three months from now , or six months from now you can do a lot of other things . phd c: cuz i ' m not actually , just logistically that spend , i do n't wanna charge the time that i have on the project too early , before there 's enough data to make good use of the time . and that 's and especially with the student this guy who seems anyway , i should n't say too much , but if someone came that was great and wanted to do some real work and they have to end by the end of this school year in the spring , how much data will i have to work with , with that person . and so it 's professor e: i , so i would think , exploratory things now . , three months from now , the transcriptions are a bit of an unknown cuz we have n't gotten those back yet as far as the timing , but as far as the collection , it does n't seem to me l like , unreasonable to say that in january , ro roughly which is roughly three months from now , we should have at least something like , twenty - five , thirty hours . postdoc b: , we need to that there 's a possibility that the transcript will need to be adjusted afterwards , postdoc b: and es especially since these people wo n't be used to dealing with multi - channel transcriptions .
so that we 'll need to adjust some and also if we wanna add things like , more refined coding of overlaps , then definitely we should count on having an extra pass through . i wanted to ask another a aspect of the data collection . there 'd be no reason why a person could n't get together several , friends , and come and argue about a topic if they wanted to , right ? professor e: if they really have something they wanna talk about as opposed to something , what we 're trying to stay away from was artificial constructions , but if it 's a real why not ? phd c: or just if you 're if you ha if there are meetings here that happen that we can record even if we do n't have them do the digits , or maybe have them do a shorter digit thing like if it was , , one string of digits , they 'd probably be willing to do . phd c: then , having the data is very valuable , cuz it 's politically better for us to say we have this many hours of audio data , especially with the itr , if we put in a proposal on it . it 'll just look like icsi 's collected a lot more audio data . , whether it 's transcribed or not , is another issue , but there 's there are research questions you can answer without the transcriptions , or at least that you can start to answer . postdoc b: it seems like you could hold some meetings . , you and maybe adam ? you you could maybe hold some additional meetings , if you wanted . phd a: would it help , we 're already talking about two levels of detail in meetings . one is without doing the digits or , i the full - blown one is where you do the digits , and everything , and then talk about doing it without digits , what if we had another level , just to collect data , which is without the headsets and we just did the table - mounted . grad g: it seems like it 's a big part of this corpus is to have the close - talking mikes . phd c: or at least , like , me personally ? i would i could n't use that data . postdoc b: i agree . and mari also , we had this came up when she was here . that 's important . professor e: i b by the , i do n't think the transcriptions are actually , in the long run , such a big bottleneck . phd c: and if it were true then i would just do that , but it 's not that bad like the room is not the bottleneck , and we have enough time in the room , it 's getting the people to come in and put on the and get the setup going . professor e: the issue is just that we 're blazing that path . and and d do you have any idea when the you 'll be able to send the ten hours to them ? grad g: , i ' ve been burning two c ds a day , which is about all i can do with the time i have . so it 'll be early next week . professor e: so early next week we send it to them , and then we check with them to see if they ' ve got it and we start , asking about the timing for it . so once they get it sorted out about how they 're gon na do it , which they 're pretty far along on , cuz they were able to read the files and so on . professor e: , but , so they have , they 're volunteering their time and they have a lot of other things to do , professor e: right ? but they but at any rate , they 'll once they get that sorted out , they 're making cassettes there , then they 're handing it to someone who they who 's who is doing it , and it 's not going to be i do n't think it 's going to be that much more of a deal for them to do thirty hours than to do one hour , .
it 's not going to be thirty phd c: , i , if there 's any way without too much more overhead , even if we do n't ship it right away to ibm even if we just collect it here for a while , to record , two or three more meetings a week , just to have the data , even if they 're not doing the digits , but they do wear the headphones ? phd c: no , i meant , , the meetings where people eat their lunch downstairs , maybe they do n't wanna be recorded , but grad g: , the problem with that is i would feel a little constrained to ? , some of the meetings grad g: , our " soccer ball " meeting ? i none of you were there for our soccer ball meeting . phd c: alright , so i 'll just throw it out there , if anyone knows of one more m or two more wee meetings per week that happen at icsi , that we could record , it would be worth it . professor e: . , we should also check with mari again , because they were really intending , maybe just did n't happen , but they were really intending to be duplicating this at some level . so then that would double what we had . and there 's a lot of different meetings at uw really m a lot more than we have here right cuz we 're not right on campus , so . phd a: is the , notion of recording any of chuck 's meetings dead in the water , or is that still a possibility ? professor e: , they seem to have some problems with it . we can we can talk about that later . , but , again , jerry is jerry 's open so , we have two speech meetings , one network meeting , jerry was open to it but i s one of the things that is a little bit of a limitation , there is , i think , when the people are not involved in our work , we probably ca n't do it every week . ? i that people are gon na feel a little bit constrained . now , it might get a little better if we do n't have them do the digits all the time . and the then so then they can just really try to put the mikes on and then just charge in and phd c: what if we give people , we cater a lunch in exchange for them having their meeting here ? postdoc b: , i do think eating while you 're doing a meeting is going to be increasing the noise . but i had another question , which is , in principle , w , i know that you do n't want artificial topics , postdoc b: but it does seem to me that we might be able to get subjects from campus to come down and do something that would n't be too artificial . , we could political discussions , or other , postdoc b: and i , people who are because , there 's also this constraint . we d it 's like , the goldibears goldi goldilocks , it 's like you do n't want meetings that are too large , but you do n't want meetings that are too small . and a and it just seems like maybe we could exploit the subj human subject p pool , in the positive sense of the word . grad g: really doubt that any of the state of california meetings would be recordable and then releasable to the general public . so i talked with some people at the haas business school who are i who are interested in speech recognition grad g: and , they hummed and hawed and said " maybe we could have meetings down here " , but then i got email from them that said " no , we decided we 're not really interested and we do n't wanna come down and hold meetings . " so , it 's gon na be a problem to get people regularly . professor e: but but we c but , we get some scattered things from this and that . and i d i do think that maybe we can get somewhere with the radio .
i i have better contacts in radio than in television , but phd c: , and they 're already they 're these things are already recorded , we do n't have to ask them to even and i ' m not wh how they record it , but they must record from individual professor e: n no , i ' m not talking about ones that are already recorded . i ' m talking about new ones phd c: , we can find out . i know mark liberman was interested in ldc getting data , and professor e: right , that 's the found data idea . but what i ' m saying is if i talk to people that i know who do these th who produce these things we could ask them if they could record an extra channel , let 's say , of a distant mike . and u routinely they would not do this . so , since i ' m interested in the distant mike , i wanna make sure that there is at least that somewhere professor e: but if we ask them to do that they might be intrigued enough by the idea that they might be e willing to the i might be able to talk them into it . grad g: . we 're getting towards the end of our disk space , so we should think about trying to wrap up here . professor e: ok . i do n't why do n't we why d u why do n't we turn them turn grad g: ok , leave them on for a moment until i turn this off , cuz that 's when it crashed last time .
topics discussed by the berkeley meeting recorder group included a potential collaboration with another icsi member regarding the analysis of inference structures , efforts by speaker mn005 to detect speaker overlap , the current status of recordings and transcriptions , and future efforts to collect meeting data . in addition to weekly meetings by the bmr group , efforts are in progress to record meetings by other icsi research groups , as well as routine discussions by non - icsi members . the group will resume its discussion of speaker anonymization in a subsequent meeting .

to save time , speaker mn005 will only mark the sample of transcribed data for regions of overlapping speech , as opposed to marking all acoustic events . the digits extraction task will be delegated to whomever is working on acoustics for the meeting recorder project . the group will inquire with other icsi researchers and colleagues at the university of washington about collecting additional meeting data . echo cancellation experiments will be performed on meeting recorder digits data .

use of pronouns ( e.g. he , she , you ) during meetings is potentially problematic for indexing referents and analysing speech understanding . sensitivity issues involved in mentioning individuals by name were also discussed . data obtained via the close - talking microphone channels do n't match meeting content yielded via the far - field microphones . the collection of meeting data from non - meeting - recorder participants is likely to be more infrequent .

a cursory review of a subset of meeting recorder data is in progress to determine whether it is suitable material for a collaboration with another icsi researcher interested in analyzing inference structures . efforts by speaker mn005 are in progress to detect overlapping speech . for a single transcribed meeting , speaker mn005 reported approximately 300 cases of overlap . future work will involve manually deriving time marks from sections of overlapping speech for the same meeting , and then experimenting with different measures , e.g. energy increase , to determine a set of acoustically salient features for identifying speaker overlap . approximately 12-13 hours of meeting recorder data have been collected , roughly 45 minutes of which have been transcribed . additional meetings by other icsi research groups will be recorded . a suggestion was made that multi - channel data also be collected in cooperation with local media broadcasters , and that such events might be recorded live from icsi . the group aims to collect over 100 hours of meeting recorder data in total . speaker consent forms are being revised . it was suggested that subjects sign a new consent form after every 10 recording sessions .
then you try to minimize the energy in some sense . and so with some constraints . professor e: so , echo cancelling is , commonly done in telephony , and it 's the obvious thing to do in this situation if you if , you 're gon na be talking some distance from a mike . phd a: when , i would have meetings with the folks in cambridge when i was at bbn over the phone , they had a some a special speaker phone and when they would first connect me , it would come on and we 'd hear all this noise . and then it was and then it would come on and it was very clear , professor e: right . so it 's taking samples , it 's doing adaptation , it 's adjusting weights , and then it 's getting the sum . so , anyway that 's a reasonable thing that i 'd like to have somebody try somebody look and and the digits would be a reasonable thing to do that with . that 'd be enough data plenty of data to do that with , and i for that task you would n't care whether it was large vocabulary speech or anything . postdoc b: is brian kingsbury 's work related to that , or is it a different type of reverberation ? professor e: brian 's kingsbury 's work is an example of what we did f from the opposite dogma . which is what i was calling the " party line " , which is that doing that thing is not really what we want . we want something more flexible , i where people might change their position , and there might be , there 's also noise . so the echo cancellation does not really allow for noise . it 's if you have a clean situation but you just have some delays , then we 'll figure out the right set of weights for your taps for your filter in order to produce the effect of those echos . but if there 's noise , then the very signal that it 's looking at is corrupted so that it 's decision about what the right , right delays are is , is right delayed signal is incorrect . and so , in a noisy situation , also in a situation that 's very reverberant with long reverberation times and really long delays , it 's typically impractical . so for those reasons , and also a c a complete inversion , if you actually i mentioned that it 's hard to really do the inversion of the room acoustics . , that 's difficult because often times the system transfer function is such that when it 's inverted you get something that 's unstable , and so , if you do your estimate of what the system is , and then you try to invert it , you get a filter that actually , rings , and goes to infinity . so it 's so there 's that technical reason , and the fact that things move , and there 's air currents there 's all sorts of reasons why it 's not really practical . so for all those kinds of reasons , we , concluded we did n't want to in do inversion , and we 're even pretty skeptical of echo cancellation , which is n't really inversion , and we decided to do this approach of taking , just picking features , which were will give you more something that was more stable , in the presence of , or absence of , room reverberation , and that 's what brian was trying to do . so , let me just say a couple things that i was gon na bring up . let 's see . i you actually already said this thing about the consent forms , which was that we now do n't have to so this was the human subjects folks who said this , or that ? postdoc b: the a , we 're gon na do a revised form , . but once a person has signed it once , then that 's valid for a certain number of meetings . she wanted me to actually estimate how many meetings and put that on the consent form . 
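The scheme laid out in the discussion above, construct a filter on the reference signal, subtract its output from the distant signal, and adjust the weights to minimize the residual energy, is the classic least-mean-squares picture. Below is a minimal normalized-LMS sketch, not the Lucent block algorithm mentioned earlier; the tap count and step size are illustrative.

import numpy as np

def nlms_cancel(reference, distant, taps=256, mu=0.5, eps=1e-8):
    """Sample-by-sample NLMS echo cancellation. `reference` and `distant`
    are 1-D numpy arrays; returns the residual with the echoed part of
    `distant` (as predicted from `reference`) subtracted off."""
    w = np.zeros(taps)                      # adaptive filter weights
    out = np.zeros(len(distant))
    for n in range(taps, len(distant)):
        x = reference[n - taps:n][::-1]     # most recent reference samples
        y = w @ x                           # current estimate of the echo
        e = distant[n] - y                  # residual after subtraction
        w += (mu / (x @ x + eps)) * e * x   # normalized weight update
        out[n] = e
    return out

A block variant updates the weights once per block of samples instead of on every sample, which is the appeal of the block echo cancellation mentioned above. And as the discussion notes, noise on the inputs corrupts the very statistics this update relies on, which is one root of the skepticism voiced here.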
i told her that would be a little bit difficult to say . so from a s practical standpoint , maybe we could have them do it once every ten meetings , . it wo n't be that many people who do it that often , but just , so long as they do n't forget that they ' ve done it , i . professor e: , back on the data thing , so there 's this one hour , ten hour , a hundred hour thing that we have . we have we have an hour that is transcribed , we have twelve hours that 's recorded but not transcribed , and at the rate we 're going , by the end of the semester we 'll have , i , forty or fifty , if we if this really , do we have that much ? professor e: eight weeks times three hours is twenty - four , so that 's , so like thirty hours ? phd c: , is there i know this sounds tough but we ' ve got the room set up . i was starting to think of some projects where you would use , similar to what we talked about with energy detection on the close - talking mikes . there are a number of interesting questions that you can ask about how interactions happen in a meeting , that do n't require any transcription . so what are the patterns , the energy patterns over the meeting ? and i ' m really interested in this but we do n't have a whole lot of data . so i was thinking , we ' ve got the room set up and you can always think of , also for political reasons , if icsi collected , two hundred hours , that looks different than forty hours , even if we do n't transcribe it ourselves , professor e: but i do n't think we 're gon na stop at the end of this semester . professor e: so , i th that if we are able to keep that up for a few months , we are gon na have more like a hundred hours . phd c: , is there are there any other meetings here that we can record , especially meetings that have some conflict in them or some deci , that are less i do n't , that have some more emotional aspects to them , or strong phd c: there 's laughter , i ' m talking more about strong differences of opinion meetings , maybe with manager types , or grad g: , that 's a good idea . that 's that would be a good match . professor e: . so i , i 'd mentioned to adam , and that was another thing i was gon na talk , mention to them before that there 's it it oc it occurred to me that we might be able to get some additional data by talking to acquaintances in local broadcast media . because , we had talked before about the problem about using found data , that it 's just set up however they have it set up and we do n't have any say about it and it 's typically one microphone , in a , or and so it does n't really give us the characteristics we want . and so i do think we 're gon na continue recording here and record what we can . but , it did occur to me that we could go to friends in broadcast media and say " hey you have this panel show , or this , this discussion show , and can you record multi - channel ? " and they may be willing to record it with professor e: , they probably already use lapel , but they might be able to have it would n't be that weird for them to have another mike that was somewhat distant . it would n't be exactly this setup , but it would be that thing , and what we were gon na get from uw , assuming they start recording , is n't als also is not going to be this exact setup . phd c: right . no , that 'd be great , if we can get more data . professor e: so , i was thinking of looking into that . 
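The transcription-free questions raised above (energy patterns over a meeting, how often speakers overlap) can be asked directly of the detector output. A sketch, assuming per-channel (start, end) speech segments like those returned by the detector sketched earlier:

def count_overlaps(segments_by_channel):
    """Count the joins into regions where two or more channels are active.
    segments_by_channel: {channel_id: [(start_sec, end_sec), ...]}"""
    events = []
    for segs in segments_by_channel.values():
        for s, e in segs:
            events.append((s, +1))
            events.append((e, -1))
    events.sort()                  # ends sort before starts at equal times
    active, overlaps = 0, 0
    for _, delta in events:
        if delta > 0 and active >= 1:   # a speaker joins someone talking
            overlaps += 1
        active += delta
    return overlaps

Against the roughly three hundred overlaps per meeting quoted earlier, counts like this from the automatic detector give a first sanity check before any transcription exists.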
the other thing that occurred to me after we had that discussion , is that it 's even possible , since , many radio shows are not live , that we could invite them to have like some of their record some of their shows here . phd c: or , they 're not as averse to wearing one of these head - mount , they 're on the radio , phd c: so . , that 'd be fantastic cuz those kinds of panels and those have interesting th - that 's an a side of style a style that we 're not collecting here , it 'd be great . professor e: and and the , the other side to it was the what which is where we were coming from i 'll talk to you more about it later is that there 's the radio stations and television stations already have worked out presumably , related to , legal issues and permissions and all that . , they already do what they do whatever they do . so it 's , it 's so it 's so it 's another source . so it 's something we should look into , we 'll collect what we collect here hopefully they will collect more at uw also and maybe we have this other source . but that it 's not unreasonable to aim at getting , significantly in excess of a hundred hours . , that was our goal . the thing was , i was hoping that we could @ in the under this controlled situation we could at least collect , thirty to fifty hours . and at the rate we 're going we 'll get pretty close to that this semester . and if we continue to collect some next semester , we should , phd c: i was mostly trying to think , " ok , if you start a project , within say a month , how much data do you have to work with . and you wanna s you wanna fr freeze your data for awhile so right now and we do n't have the transcripts back yet from ibm do , do we now ? professor e: , we do n't even have it for this f , forty - five minutes , that was phd c: so , not complaining , i was just trying to think , what kinds of projects can you do now versus six months from now and they 're pretty different , because professor e: . so i was thinking right now it 's this exploratory where you look at the data , you use some primitive measures and get a feeling for what the scatter plots look like , professor e: and and meanwhile we collect , and it 's more like , three months from now , or six months from now you can do a lot of other things . phd c: cuz i ' m not actually , just logistically that spend , i do n't wanna charge the time that i have on the project too early , before there 's enough data to make good use of the time . and that 's and especially with the student this guy who seems anyway , i should n't say too much , but if someone came that was great and wanted to do some real work and they have to end by the end of this school year in the spring , how much data will i have to work with , with that person . and so it 's professor e: i , so i would think , exploratory things now . , three months from now , the transcriptions are a bit of an unknown cuz we have n't gotten those back yet as far as the timing , but as far as the collection , it does n't seem to me l like , unreasonable to say that in january , ro roughly which is roughly three months from now , we should have at least something like , twenty - five , thirty hours . postdoc b: , we need to that there 's a possibility that the transcript will need to be adjusted afterwards , postdoc b: and es especially since these people wo n't be used to dealing with multi - channel transcriptions . 
so that we 'll need to adjust some and also if we wanna add things like , more refined coding of overlaps , then definitely we should count on having an extra pass through . i wanted to ask another a aspect of the data collection . there 'd be no reason why a person could n't get together several , friends , and come and argue about a topic if they wanted to , right ? professor e: if they really have something they wanna talk about as opposed to something @ , what we 're trying to stay away from was artificial constructions , but if it 's a real why not ? phd c: or just if you 're if you ha if there are meetings here that happen that we can record even if we do n't have them do the digits , or maybe have them do a shorter digit thing like if it was , , one string of digits , they 'd probably be willing to do . phd c: then , having the data is very valuable , cuz it 's politically better for us to say we have this many hours of audio data , especially with the itr , if we put in a proposal on it . it 'll just look like icsi 's collected a lot more audio data . , whether it 's transcribed or not , is another issue , but there 's there are research questions you can answer without the transcriptions , or at least that you can start to answer . postdoc b: it seems like you could hold some meetings . , you and maybe adam ? you you could maybe hold some additional meetings , if you wanted . phd a: would it help , we 're already talking about two levels of detail in meetings . one is without doing the digits or , i the full - blown one is where you do the digits , and everything , and then talk about doing it without digits , what if we had another level , just to collect data , which is without the headsets and we just did the table - mounted . grad g: it seems like it 's a big part of this corpus is to have the close - talking mikes . phd c: or at least , like , me personally ? i would i could n't use that data . postdoc b: i agree . and mari also , we had this came up when she was here . that 's important . professor e: i b by the , i do n't think the transcriptions are actually , in the long run , such a big bottleneck . phd c: and if it were true than i would just do that , but it 's not that bad like the room is not the bottleneck , and we have enough time in the room , it 's getting the people to come in and put on the and get the setup going . professor e: the issue is just that we 're blazing that path . and and d do you have any idea when the you 'll be able to send the ten hours to them ? grad g: , i ' ve been burning two c ds a day , which is about all do with the time i have . so it 'll be early next week . professor e: so early next week we send it to them , and then we check with them to see if they ' ve got it and we start , asking about the timing for it . so once they get it sorted out about how they 're gon na do it , which they 're pretty along on , cuz they were able to read the files and so on . professor e: , but , so they have , they 're volunteering their time and they have a lot of other things to do , professor e: right ? but they but at any rate , they 'll once they get that sorted out , they 're making cassettes there , then they 're handing it to someone who they who 's who is doing it , and it 's not going to be i do n't think it 's going to be that much more of a deal for them to do thirty hours then to do one hour , . 
it 's not going to be thirty phd c: , i , if there 's any way without too much more overhead , even if we do n't ship it right away to ibm even if we just collect it here for awhile , to record , two or three more meeting a week , just to have the data , even if they 're not doing the digits , but they do wear the headphones ? phd c: no , i meant , , the meetings where people eat their lunch downstairs , maybe they do n't wanna be recorded , but grad g: , the problem with that is i would feel a little constrained to ? , some of the meetings grad g: , our " soccer ball " meeting ? i none of you were there for our soccer ball meeting . phd c: alright , so i 'll just throw it out there , if anyone knows of one more m or two more wee meetings per week that happen at icsi , that we could record , it would be worth it . professor e: . , we should also check with mari again , because they were really intending , maybe just did n't happen , but they were really intending to be duplicating this in some level . so then that would double what we had . and there 's a lot of different meetings at uw really m a lot more than we have here right cuz we 're not right on campus , so . phd a: is the , notion of recording any of chuck 's meetings dead in the water , or is that still a possibility ? professor e: , they seem to have some problems with it . we can we can talk about that later . , but , again , jerry is jerry 's open so , we have two speech meetings , one network meeting , jerry was open to it but i s one of the things that is a little bit of a limitation , there is a think when the people are not involved in our work , we probably ca n't do it every week . ? i that people are gon na feel a little bit constrained . now , it might get a little better if we do n't have them do the digits all the time . and the then so then they can just really try to put the mikes on and then just charge in and phd c: what if we give people , we cater a lunch in exchange for them having their meeting here ? postdoc b: , i do think eating while you 're doing a meeting is going to be increasing the noise . but i had another question , which is , in principle , w , i know that you do n't want artificial topics , postdoc b: but it does seem to me that we might be able to get subjects from campus to come down and do something that would n't be too artificial . , we could political discussions , or other , postdoc b: and i , people who are because , there 's also this constraint . we d it 's like , the goldibears goldi goldilocks , it 's like you do n't want meetings that are too large , but you do n't want meetings that are too small . and a and it just seems like maybe we could exploit the subj human subject p pool , in the positive sense of the word . grad g: really doubt that any of the state of california meetings would be recordable and then releasable to the general public . so i talked with some people at the haas business school who are i who are interested in speech recognition grad g: and , they hummed and hawed and said " maybe we could have meetings down here " , but then i got email from them that said " no , we decided we 're not really interested and we do n't wanna come down and hold meetings . " so , it 's gon na be a problem to get people regularly . professor e: but but we c but , we get some scattered things from this and that . and i d i do think that maybe we can get somewhere with the radio . 
i i have better contacts in radio than in television , but phd c: , and they 're already they 're these things are already recorded , we do n't have to ask them to even and i ' m not wh how they record it , but they must record from individual professor e: n no , i ' m not talking about ones that are already recorded . i ' m talking about new ones phd c: , we can find out . i know mark liberman was interested in ldc getting data , and professor e: right , that 's the found data idea . but what i ' m saying is if i talk to people that i know who do these th who produce these things we could ask them if they could record an extra channel , let 's say , of a distant mike . and u routinely they would not do this . so , since i ' m interested in the distant mike , i wanna make that there is at least that somewhere professor e: but if we ask them to do that they might be intrigued enough by the idea that they might be e willing to the i might be able to talk them into it . grad g: . we 're getting towards the end of our disk space , so we should think about trying to wrap up here . professor e: ok . i do n't why do n't we why d u why do n't we turn them turn grad g: ok , leave them on for a moment until i turn this off , cuz that 's when it crashed last time . ###summary: topics discussed by the berkeley meeting recorder group included a potential collaboration with another icsi member regarding the analysis of inference structures , efforts by speaker mn005 to detect speaker overlap , the current status on recordings and transcriptions , and future efforts to collect meeting data. in addition to weekly meetings by the bmr group , efforts are in progress to record meetings by other icsi research groups , as well as routine discussions by non-icsi members. the group will resume its discussion of speaker anonymization in a subsequent meeting. to save time , speaker mn005 will only mark the sample of transcribed data for regions of overlapping speech , as opposed to marking all acoustic events. the digits extraction task will be delegated to whomever is working on acoustics for the meeting recorder project. the group will inquire with other icsi researchers and colleagues at the university of washington about collecting additional meeting data. echo cancellation experiments will be performed on meeting recorder digits data. use of pronouns ( e.g . he , she , you ) during meetings is potentially problematic for indexing referents and analysing speech understanding. sensitivity issues involved in mentioning individuals by name were also discussed. data obtained via the close-talking microphone channels don't match meeting content yielded via the far-field microphones. the collection of meeting data from non-meeting recorder participants is likely to be more infrequent. a cursory review of a subset of meeting recorder data is in progress to determine whether it is suitable material for a collaboration with another icsi researcher interested in analyzing inference structures. efforts by speaker mn005 are in progress to detect overlapping speech. for a single transcribed meeting , speaker mn005 reported approximately 300 cases of overlap. future work will involve manually deriving time marks from sections of overlapping speech for the same meeting , and then experimenting with different measures , e.g . energy increase , to determine a set of acoustically salient features for identifying speaker overlap. 
approximately 12-13 hours of meeting recorder data have been collected , roughly 45 minutes of which have been transcribed. additional meetings by other icsi research groups will be recorded. a suggestion was made that multi-channel data also be collected in cooperation with local media broadcasters , and that such events might be recorded live from icsi. the group aims to collect over 100 hours of meeting recorder data in total. speaker consent forms are being revised. it was suggested that subjects should sign a new consent form after 10 recording sessions.
grad f: , so i wanted to discuss digits briefly , but that wo n't take too long . professor c: good . right . ok , agenda items , we have digits , what else we got ? postdoc b: , do we wanna say something about the , an update of the , transcript ? phd g: and i that includes some the filtering for the , the asi refs , too . phd g: for the references that we need to go from the fancy transcripts to the { nonvocalsound } brain - dead . postdoc b: it 'll it 'll be a re - cap of a meeting that we had jointly this morning . professor c: got it . anything else more pressing than those things ? so , why do n't we just do those . you said yours was brief , so grad f: ok . ok , the , w as you can see from the numbers on the digits we 're almost done . the digits goes up to about four thousand . , and so , we probably will be done with the ti - digits in , another couple weeks . , depending on how many we read each time . so there were a bunch that we skipped . , someone fills out the form and then they 're not at the meeting and so it 's blank . , but those are almost all filled in as . and so , once we 're it 's done it would be very to train up a recognizer and actually start working with this data . grad f: so , i extracted , ther - there was a file sitting around which people have used here as a test set . it had been randomized and so on and that 's just what i used to generate the order . of these particular ones . professor c: so , i ' m impressed by what we could do , is take the standard training set for ti - digits , train up with whatever , great features we think we have , and then test on this test set . and presumably it should do reasonably on that , and then , presumably , we should go to the distant mike , and it should do poorly . and then we should get really smart over the next year or two , and it that should get better . grad f: , but , in order to do that we need to extract out the actual digits . , so that the reason it 's not just a transcript is that there 're false starts , and misreads , and miscues and things like that . and so i have a set of scripts and x waves where you just select the portion , hit r , it tells you what the next one should be , and you just look for that . , so it 'll put on the screen , " the next set is six nine , nine two " . and you find that , and , hit the key and it records it in a file in a particular format . grad f: and so the question is , should we have the transcribers do that or should we just do it ? , some of us . i ' ve been do i ' ve done , eight meetings , something like that , just by hand . just myself , rather . so it will not take long . postdoc b: my feeling is that we discussed this right before coffee and it 's a fine idea partly because , it 's not un unrelated to their present skill set , but it will add , for them , an extra dimension , it might be an interesting break for them . and also it is contributing to the , c composition of the transcript cuz we can incorporate those numbers directly and it 'll be a more complete transcript . so i ' m it 's fine , that part . professor c: so you think it 's fine to have the transcribers do it ? , ok . grad f: there 's one other small bit , which is just entering the information which at s which is at the top of this form , onto the computer , to go along with the where the digits are recorded automatically . grad f: and so it 's just , typing in name , times time , date , and so on . , which again either they can do , but it is , firing up an editor , or , again , do . 
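The bookkeeping just described, a per-form file holding the reader's information with the extracted digit times appended at the bottom, could look like the following. The real scripts' file format is not specified here, so this JSON-lines layout and these field names are purely hypothetical.

import json

def new_digits_record(path, name, date, form_id):
    """Write the header the extraction scripts would expect (hypothetical
    layout): who read the form, when, and which form it was."""
    with open(path, "w") as f:
        f.write(json.dumps({"name": name, "date": date,
                            "form": form_id}) + "\n")

def append_digit_times(path, digit_string, start_sec, end_sec):
    """Append one extracted digit string with the start/end times the
    annotator marked in the waveform display."""
    with open(path, "a") as f:
        f.write(json.dumps({"digits": digit_string,
                            "start": start_sec, "end": end_sec}) + "\n")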
or someone else can do . postdoc b: and , that , i ' m not , that one i ' m not so if it 's into the , things that , i , wanted to use the hours for , because the , the time that they 'd be spending doing that they would n't be able to be putting more words on . phd d: so are these two separate tasks that can happen ? or do they have to happen at the same time before grad f: no they do n't have this you have to enter the data before , you do the second task , but they do n't have to happen at the same time . grad f: so it 's just i have a file whi which has this information on it , and then when you start using my scripts , for extracting the times , it adds the times at the bottom of the file . and so , , it 's easy to create the files and leave them blank , and so actually we could do it in either order . grad f: , it 's to have the same person do it just as a double - check , to make you 're entering for the right person . but , either way . professor c: just by way of , a , order of magnitude , , we ' ve been working with this aurora , data set . and , the best score , on the , nicest part of the data , that is , where you ' ve got training and test set that are the same kinds of noise and , is about , the best score was something like five percent , error , per digit . professor c: you 're right . so if you were doing ten digit , recognition , you would really be in trouble . so the the point there , and this is car noise , things , but real situation , professor c: , " real " , the there 's one microphone that 's close , that they have as this thing , close versus distant . but in a car , instead of having a projector noise it 's car noise . but it was n't artificially added to get some artificial signal - to - noise ratio . it was just people driving around in a car . so , that 's an indication , that was with , many sites competing , and this was the very best score and , so . more typical numbers like phd d: although the models were n't , that good , right ? , the models are pretty crappy ? professor c: that we could have done better on the models , but that we got this is the typical number , for all of the , things in this task , all of the , languages . and so we 'd probably the models would be better in some than in others . , so , . anyway , just an indication once you get into this realm even if you 're looking at connected digits it can be pretty hard . postdoc b: it 's gon na be fun to see how we , compare at this . very exciting . s @ . grad f: the prosodics are so much different s it 's gon na be , strange . the prosodics are not the same as ti - digits , . so i ' m not how much of effect that will have . grad f: , just what we were talking about with grouping . that with these , the grouping , there 's no grouping , and so it 's just the only discontinuity you have is at the beginning and the end . phd g: so there 's also the not just the prosody but the cross - word modeling is probably quite different . grad f: but in ti - digits , they 're reading things like zip codes and phone numbers and things like that , grad f: so it 's gon na be different . i do n't remember . , very good , right ? professor c: , i th no we got under a percent , but it was but it 's but . the very best system that i saw in the literature was a point two five percent that somebody had at bell labs , or . , but . but , pulling out all the stops . postdoc b: s @ . 
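To see why five percent per digit is real trouble for longer strings: assuming independent errors, a ten-digit string comes out wrong about forty percent of the time.

p_digit = 0.05                       # per-digit error rate quoted above
p_string = 1 - (1 - p_digit) ** 10   # ten digits, independent errors assumed
print(round(p_string, 2))            # -> 0.4, i.e. ~40% of strings wrong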
it s strikes me that there are more each of them is more informative because it 's so , random , professor c: but a lot of systems get half a percent , or three - quarters a percent , and we 're in there somewhere . grad f: but that it 's really it 's close - talking mikes , no noise , clean signal , just digits , every everything is good . grad f: yes , exactly . and we ' ve only recently got it to anywhere near human . grad f: and it 's still like an order of magnitude worse than what humans do . so . grad f: ok , so , what i 'll do then is i 'll go ahead and enter , this data . and then , hand off to jane , and the transcribers to do the actual extraction of the digits . professor c: one question i have that , we would n't know the answer to now but might , do some guessing , but i was talking before about doing some model modeling of arti , marking of articulatory , features , with overlap and so on . and , and , on some subset . one thought might be to do this , on the digits , or some piece of the digits . , it 'd be easier , and . the only thing is i ' m a little concerned that maybe the phenomena , in w i the reason for doing it is because the argument is that certainly with conversational speech , the that we ' ve looked at here before , just doing the simple mapping , from , the phone , to the corresponding features that you could look up in a book , is n't right . it is n't actually right . there 's these overlapping processes where some voicing some up and then some , some nasality is comes in here , and . and you do this gross thing saying " i it 's this phone starting there " . so , that 's the reasoning . but , it could be that when we 're reading digits , because it 's for such a limited set , that maybe that phenomenon does n't occur as much . i . di - an anybody ? do you have any ? anybody have any opinion about that , postdoc b: and that people might articulate more , and you that might end up with more a closer correspondence . grad f: it 's a would , this corpus really be the right one to even try that on ? phd g: it 's definitely true that , when people are , reading , even if they 're - reading what , they had said spontaneously , that they have very different patterns . mitch showed that , and some , dissertations have shown that . so the fact that they 're reading , first of all , whether they 're reading in a room of , people , or rea , just the fact that they 're reading will make a difference . and , depends what you 're interested in . professor c: see , i . so , may maybe the thing will be do to take some very small subset , not have a big , program , but take a small set , subset of the conversational speech and a small subset of the digits , and look and just get a feeling for it . , just take a look . really . postdoc b: h that could be an interesting design , too , cuz then you 'd have the com the comparison of the , predictable speech versus the less predictable speech professor c: cuz i do n't think anybody is , i at least , i , of anybody , , i , the answers . postdoc b: and maybe you 'd find that it worked in , in the , case of the pr of the , non - predictable . phd d: hafta think about , the particular acoustic features to mark , too , because , some things , they would n't be able to mark , like , , tense lax . some things are really difficult . , just listening . professor c: , but i was , like he said , i was gon na bring john in and ask john what he thought . professor c: but you want it be restrictive but you also want it to have coverage . 
i you should . it should be such that if you , if you had o , all of the features , determined that you were ch have chosen , that would tell you , in the steady - state case , the phone . so , . grad f: even , i with vowels that would be pretty hard , would n't it ? to identify actually , which one it is ? postdoc b: it would seem to me that the points of articulation would be m more , g , that 's about articulatory features , about , points of articulation , which means , rather than vowels . postdoc b: so , is it , bilabial or dental or is it , palatal . which which are all like where your tongue comes to rest . postdoc b: place . , what whatev whatever i s said , that 's i really meant place . phd g: it 's also , there 's , really a difference between , the pronunciation models in the dictionary , and , the pronunciations that people produce . and , so , you get , some of that information from steve 's work on the labeling and it really , i actually think that data should be used more . that maybe , although the meeting context is great , that he has transcriptions that give you the actual phone sequence . and you can go from not from that to the articulatory features , but that would be a better starting point for marking , the gestural features , then , data where you do n't have that , because , we you wanna know , both about the way that they 're producing a certain sound , and what kinds of , phonemic , differences you get between these , transcribed , sequences and the dictionary ones . professor c: you might be right that mi might be the way at getting at , what i was talking about , but the particular reason why i was interested in doing that was because i remember , when that happened , and , john ohala was over here and he was looking at the spectrograms of the more difficult ones . , he did n't to say , about , what is the sequence of phones there . they came up with some compromise . because that really was n't what it look like . it did n't look like a sequence of phones it look like this blending thing happening here and here . phd g: but right . but it still is there 's a there are two steps . one , one is going from a dictionary pronunciation of something , like , " gon na see you tomorrow " , phd g: it could be " going to " or " gon na " or " gonta s " . and , . gon na see you tomorrow , " guh see you tomorrow " . and , that it would be to have these , intermediate , or these some these reduced pronunciations that those transcribers had marked or to have people mark those as . because , it 's not , that easy to go from the , dictionary , word pronuncia the dictionary phone pronunciation , to the gestural one without this intermediate or a syllable level , representation . professor c: do you mean , , i ' m jus at the moment we 're just talking about what , to provide as a tool for people to do research who have different ideas about how to do it . so , you might have someone who just has a wor has words with states , and has , comes from articulatory gestures to that . and someone else , might actually want some phonetic intermediate thing . so it would be best to have all of it if we could . but , grad f: but what i ' m imagining is a score - like notation , where each line is a particular feature . right , so you would say , it 's voiced through here , and so you have label here , and you have nas nasal here , and , they could be overlapping in all sorts of bizarre ways that do n't correspond to the timing on phones . 
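The score-like notation just described, one line per feature with intervals free to overlap in ways that ignore phone boundaries, maps naturally onto independent labeled-interval tiers. A sketch; the tier and label names are placeholders, not a proposed feature taxonomy:

from dataclasses import dataclass, field

@dataclass
class Interval:
    start: float   # seconds
    end: float
    label: str     # e.g. "+voice", "+nasal"

@dataclass
class FeatureScore:
    """One independent tier per feature; tiers need not line up with each
    other or with phone boundaries, so gestural overlap is representable."""
    tiers: dict = field(default_factory=dict)

    def mark(self, feature, start, end, label):
        self.tiers.setdefault(feature, []).append(Interval(start, end, label))

    def active_at(self, t):
        """All feature labels in force at time t: the vertical slice that,
        in the steady-state case, should identify the phone. (If a tier
        has overlapping intervals, the last mark wins.)"""
        return {f: iv.label for f, ivs in self.tiers.items()
                for iv in ivs if iv.start <= t < iv.end}

score = FeatureScore()
score.mark("voicing", 0.10, 0.48, "+voice")
score.mark("nasality", 0.35, 0.61, "+nasal")   # overlaps the voicing span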
professor c: this is the reason why i remember when at one of the switchboard , workshops , that when we talked about doing the transcription project , dave talkin said , " ca n't be done " . he was he was , what he meant was that this is n't , a sequence of phones , and when you actually look at switchboard that 's , not what you see , and , . and . it , grad f: and the inter - annotator agreement was not that good , on the harder ones ? phd g: it depends how you look at it , and i understand what you 're saying about this , transcription exactly , because i ' ve seen , where does the voicing bar start and . all i ' m saying is that , it is useful to have that the transcription of what was really said , and which syllables were reduced . , if you 're gon na add the features it 's also useful to have some level of representation which is , is a reduced it 's a pronunciation variant , that currently the dictionaries do n't give you because if you add them to the dictionary and you run recognition , you add confusion . so people purposely do n't add them . so it 's useful to know which variant was produced , at least at the phone level . phd d: so it would be great if we had , either these , labelings on , the same portion of switchboard that steve marked , or , steve 's type markings on this data , with these . phd g: and steve 's type is fairly it 's not that slow , , exactly what the , timing was , but . professor c: u i do n't disagree with it the on the only thing is that , what you actually will end en end up with is something , i it 's all compromised , right , so , the string that you end up with is n't , actually , what happened . but it 's the best compromise that a group of people scratching their heads could come up with to describe what happened . professor c: but . and it 's more accurate than the dictionary or , if you ' ve got a pronunciation lexicon that has three or four , professor c: this might be have been the fifth one that you tr that you pruned or whatever , phd g: that 's what i meant is an and in some places it would fill in , so the kinds of gestural features are not everywhere . phd g: so there are some things that you do n't have access to either from your ear or the spectrogram , but what phone it was and that 's about all you can say . and then there are other cases where , nasality , voicing phd d: it 's just having , multiple levels of , information and marking , on the signal . grad f: the other difference is that the features , are not synchronous , right . they overlap each other in weird ways . so it 's not a strictly one - dimensional signal . so that 's sorta qualitatively different . phd g: you can add the features in , but it 'll be underspecified . th - there 'll be no way for you to actually mark what was said completely by features . grad f: not with our current system but you could imagine designing a system , that the states were features , rather than phones . phd g: and i if you 're , we ' ve probably have a separate , discussion of , of whether you can do that . postdoc b: that 's , is n't that was , but that was n't that kinda the direction ? professor c: , so , what where this is , i want would like to have something that 's useful to people other than those who are doing the specific research i have in mind , so it should be something broader . 
but , the but where i ' m coming from is , we 're coming off of that larry saul did with , , john dalan and muzim rahim in which , they , have , a m a multi - band system that is , trained through a combination of gradient learning an and em , to , estimate , the , value for m for a particular feature . ok . and this is part of a larger , image that john dalan has about how the human brain does it in which he 's imagining that , individual frequency channels are coming up with their own estimate , of these , these kinds of something like this . might not be , exact features that , jakobson thought of . but some , something like that . some low - level features , which are not , fully , phone classification . and the th this particular image , of how thi how it 's done , is that , then given all of these estimates at that level , there 's a level above it , then which is making , some sound unit classification such as , phone and , . you could argue what , what a sound unit should be , and . but that 's what i was imagining doing , and but it 's still open within that whether you would have an intermediate level in which it was actually phones , or not . you would n't necessarily have to . , but , again , i would n't wanna , would n't want what we produced to be so , know , local in perspective that it was matched , what we were thinking of doing one week , and and , what you 're saying is right . that , that if we , can we should put in , another level of , of description there if we 're gon na get into some of this low - level . phd d: , if we 're talking about , having the , annotators annotate these kinds of features , it seems like , you the the question is , do they do that on , meeting data ? or do they do that on , switchboard ? postdoc b: , i was thinking that it would be interesting , to do it with respect to , parts of switchboard anyway , in terms of , partly to see , if you could , generate first guesses at what the articulatory feature would be , based on the phone representation at that lower level . it might be a time gain . but also in terms of comparability of , phd d: cuz the , and then also , if you did it on switchboard , you would have , the full continuum of transcriptions . phd d: you 'd have it , from the lowest level , the ac acoustic features , then you 'd have the , the phonetic level that steve did , professor c: it 's so it 's a little different . so i we 'll see wha how much we can , get the people to do , and how much money we 'll have and all this thing , phd d: but it might be good to do what jane was saying , seed it , with , guesses about what we think the features are , based on , the phone or steve 's transcriptions . to make it quicker . grad f: alright , so based on the phone transcripts they would all be synchronous , but then you could imagine , nudging them here and there . professor c: what i ' m a l little behind in what they 're doing , now , and , the they 're doing on switchboard now . but that , steve and the gang are doing , something with an automatic system first and then doing some adjustment . as i re as i recall . so that 's probably the right way to go anyway , is to start off with an automatic system with a pretty rich pronunciation dictionary that , , tries , to label it all . and then , people go through and fix it . postdoc b: so in our case you 'd think about us s starting with maybe the regular dictionary entry , and then ? or would we professor c: , regular dictionary , this is a pretty rich dictionary . 
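Seeding the tiers from a phone transcription, as suggested above, would start everything out synchronous with the phone boundaries and leave the annotators to nudge the edges where gestures actually overlap. A sketch reusing the FeatureScore class from earlier; the phone-to-feature table is a tiny hypothetical stand-in for a real phonetics reference:

# hypothetical lookup; a real table would cover the whole phone set
PHONE_FEATURES = {
    "m": {"voicing": "+voice", "nasality": "+nasal", "place": "bilabial"},
    "b": {"voicing": "+voice", "nasality": "-nasal", "place": "bilabial"},
    "s": {"voicing": "-voice", "nasality": "-nasal", "place": "alveolar"},
}

def seed_tiers(phone_segments):
    """phone_segments: time-aligned [(phone, start_sec, end_sec), ...],
    e.g. from a forced alignment with a rich pronunciation dictionary.
    Returns a FeatureScore whose interval edges annotators then adjust."""
    score = FeatureScore()
    for phone, start, end in phone_segments:
        for feature, label in PHONE_FEATURES.get(phone, {}).items():
            score.mark(feature, start, end, label)
    return score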
it 's got , got a fair number of pronunciations in it phd d: or you could start from the if we were gon na , do the same set , of sentences that steve had , done , we could start with those transcriptions . phd g: that 's actually what i was thinking , is tha the problem is when you run , if you run a regular dictionary , even if you have variants , in there , which most people do n't , you do n't always get , out , the actual pronunciations , so that 's why the human transcriber 's giving you the that pronunciation , phd g: we should catch up on what steve is , that would be a good i good idea . professor c: , so that i we also do n't have , we ' ve got a good start on it , but we do n't have a really good , meeting , recorder or recognizer or transcriber or anything yet , so , another way to look at this is to , do some on switchboard which has all this other , to it . and then , as we get , further down the road and we can do more things ahead of time , we can , do some of the same things to the meeting data . postdoc b: and i ' m and these people might they are , s most of them are trained with ipa . they 'd be able to do phonetic - level coding , or articulatory . postdoc b: , they 're interested in continuing working with us , so i , and this would be up their alley , so , we could when the when you d meet with , with john ohala and find , what taxonomy you want to apply , then , they 'd be , good to train onto it . grad f: to to , you 'd wanna iterate , somehow . it 's interesting thing to think about . phd g: it might be neat to do some , phonetic , features on these , nonword words . are are these kinds of words that people never the " " s and the " and the these k no , i ' m serious . there are all these kinds of functional , elements . i what you call them . but not just fill pauses but all kinds of ways of interrupting and . phd g: and some of them are , " - " s , and " " s , and , " ! " ok " , " grunts , that might be interesting . phd a: , , i worked a little bit on the presegmentation to get another version which does channel - specific , speech - nonspeech detection . and , what i did is i used some normalized features which , look in into the which is normalized energy , energy normalized by the mean over the channels and by the , minimum over the , other . within each channel . and to , to normalize also loudness and modified loudness and things and that those special features actually are in my feature vector . and , and , therefore to be able to , somewhat distinguish between foreground and background speech in the different in each channel . and , i tested it on three or four meetings and it seems to work , fairly , i would say . there are some problems with the lapel mike . , . grad f: so i understand that 's what you were saying about your problem with , minimum . phd a: as there are some problems in , when , in the channel , there they the speaker does n't talk much or does n't talk . then , the , there are some problems with n with normalization , and , then , there the system does n't work . so , i ' m glad that there is the digit part , where everybody is forced to say something , so , that 's great for my purpose . and , i , then the evaluation of the system is a little bit hard , as i do n't have any references . phd a: , that 's the one wh where i do the training on so i ca n't do the evaluation on so , can the transcribers perhaps do some , some meetings in terms of speech - nonspeech in the specific channels ? 
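The features described above are stated loosely (energy normalized by the mean over the channels and by the minimum within each channel), so the following is one plausible reading, not the actual implementation. It also makes the failure mode mentioned above concrete: if a channel's speaker hardly talks, the channel minimum stops being a background estimate and the normalization misbehaves.

import numpy as np

def normalized_energy_features(frame_energies):
    """frame_energies: shape (n_channels, n_frames), linear frame energy.
    Returns two per-channel, per-frame features: energy relative to the
    cross-channel mean (who is loudest right now) and dB above each
    channel's own energy floor (foreground vs. background on that mike)."""
    e = np.asarray(frame_energies, dtype=float) + 1e-10
    rel_mean = e / e.mean(axis=0, keepdims=True)   # across channels
    floor = e.min(axis=1, keepdims=True)           # per-channel minimum
    rel_floor = 10 * np.log10(e / floor)           # dB above own floor
    return rel_mean, rel_floor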
postdoc b: so , i might have done what you 're requesting , though i did it in the service of a different thing . postdoc b: i have thirty minutes that i ' ve more tightly transcribed with reference to individual channels . phd a: ok . ok , that 's great . that 's great for me . , so . postdoc b: so , e so the , we have the , th they transcribe as if it 's one channel with these with the slashes to separate the overlapping parts . and then we run it through then it then i ' m gon na edit it and i ' m gon na run it through channelize which takes it into dave gelbart 's form format . and then you have , all these things split across according to channel , and then that means that , if a person contributed more than once in a given , overlap during that time bend that two parts of the utterance end up together , it 's the same channel , and then i took his tool , and last night for the first thirty minutes of one of these transcripts , i , tightened up the , boundaries on individual speakers ' channels , cuz his interface allows me to have total flexibility in the time tags across the channels . and , so . phd a: so , that 's great , but what would be to have some more meetings , not just one meeting to be that , there is a system , grad f: , so if we could get a couple meetings done with that level of precision that would be a good idea . postdoc b: , ok . , how m much time so the meetings vary in length , what are we talking about in terms of the number of minutes you 'd like to have as your training set ? phd a: it seems to me that it would be good to have , a few minutes from different meetings , so . but i ' m not about how much . postdoc b: ok , now you 're saying different meetings because of different speakers or because of different audio quality or both or ? professor c: , we do n't have that much variety in meetings yet , we have this meeting and the feature meeting and we have a couple others that we have , couple examples of . but but , phd g: we can try running we have n't done this yet because , , andreas an is gon na move over the sri recognizer . i i ran out of machines at sri , cuz we 're running the evals and do n't have machine time there . but , once that 's moved over , hopefully in a couple days , then , we can take , what jane just told us about as , the presegmented , { nonvocalsound } the segmentations that you did , at level eight or som at some , threshold that jane , tha right , and try doing , forced alignment . , on the word strings . phd g: and if it 's good , then that will that may give you a good boundary . if it 's good , we do n't then we 're fine , phd g: but , i yet whether these , segments that contain a lot of pauses around the words , will work or not . phd a: i would quite like to have some manually transcribed references for the system , as i ' m not if it 's really good to compare with some other automatic , found boundaries . postdoc b: , no , if we were to start with this and then tweak it h manually , would that would be ok ? phd g: they might be ok . it it really depends on a lot of things , but , i would have maybe a transciber , look at the result of a forced alignment and then adjust those . phd g: that might save some time . if they 're horrible it wo n't help , but they might not be horrible . so but i 'll let when we , have that . 
postdoc b: how many minutes would you want from , we could easily , get a section , like say a minute or so , from every meeting that we have so f from the newer ones that we 're working on , everyone that we have . and then , should provide this . phd a: if it 's not the first minute of the meeting , that 's ok with me , but , in the first minute , often there are some strange things going on which are n't really , for , which are n't re really good . so . what what i 'd quite like , perhaps , is , to have , some five minutes of different meetings , postdoc b: somewhere not in the very beginning , five minutes , ok . and , then i wanted to ask you just for my inter information , then , would you , be trai cuz i do n't quite unders so , would you be training then , the segmenter so that , it could , on the basis of that , segment the rest of the meeting ? so , if i give you like five minutes is the idea that this would then be applied to , providing tighter time bands ? phd a: that 's but i hope that i do n't need to do it . so , it c can be do in an unsupervised way . so . phd a: i ' m not , but , for those three meetings whi which i did , it seems to be , quite , but , there are some as i said some problems with the lapel mike , but , perhaps we can do something with cross - correlations to , to get rid of the of those . that 's that 's what i that 's my future work . what i want to do is to look into cross - correlations for removing those , false overlaps . phd g: are the , wireless , different than the wired , mikes , ? , have you noticed any difference ? phd a: i ' m not , if there are any wired mikes in those meetings , or , i have to loo have a look at them but , i ' m there 's no difference between , postdoc b: ok , so then , if that 's five minutes per meeting we ' ve got like twelve minutes , twelve meetings , roughly , that i ' m that i ' ve been working with , then professor c: of of the meetings that you 're working with , how many of them are different , tha postdoc b: , just from what i ' ve seen , there are some where , you 're present or not present , and , then you have the difference between the networks group and this group professor c: so i did n't know in the group you had if you had so you have the networks meeting ? postdoc b: we could , you recorded one last week or so . i could get that new one in this week i get that new one in . professor c: and having as much variety for speaker certainly would be a big part of that . postdoc b: ok , so if i , ok , included include , ok , then , if i were to include all together samples from twelve meetings that would only take an hour and i could get the transcribers to do that right , what is , that would be an hour sampled , and then they 'd transcribe those that hour , right ? that 's what i should do ? postdoc b: i mean adjust . so they get it into the multi - channel format and then adjust the timebands so it 's precise . postdoc b: i did , , so , last night i did , gosh , last night , i did about half an hour in , three hours , which is not , terrific , but , anyway , it 's an hour and a half per phd a: do the transcribers actually start wi with , transcribing new meetings , or are they ? postdoc b: , they 're still working they still have enough to finish that i have n't assigned a new meeting , but the next , m i was about to need to assign a new meeting and i was going to take it from one of the new ones , and i could easily give them jerry feldman 's meeting , no problem . 
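A quick sanity check on that sampling plan, taking the first rate quoted above (about half an hour tightened in three hours of work) at face value:

sample_min = 12 * 5            # twelve meetings, five minutes each
rate = 3.0 / 0.5               # hours of work per hour of audio tightened
work_hours = (sample_min / 60) * rate
print(sample_min, work_hours)  # -> 60 minutes of audio, ~6 hours of work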
and , then professor c: they 're running out of data unless we s make the decision that we should go over and start , transcribing the other set . postdoc b: and so i was in the process of like editing them but this is wonderful news . postdoc b: we funded the experiment with , also we were thinking maybe applying that to getting the , that 'll be , very useful to getting the overlaps to be more precise all the way through . postdoc b: yes , it does . so , , liz , and don , and i met this morning , in the barco room , with the lecture hall , postdoc b: and this afternoon , it drifted into the afternoon , concerning this issue of , the , there 's the issue of the interplay between the transcript format and the processing that , they need to do for , the sri recognizer . and , , so , i mentioned the process that i ' m going through with the data , so , i get the data back from the transcri , s , metaphorically , get the data back from the transcriber , and then i , check for simple things like spelling errors and things like that . and , i ' m going to be doing a more thorough editing , with respect to consistency of the conventions . but they 're generally very good . and , then , i run it through , the channelize program to get it into the multi - channel format , ok . and the , what we discussed this morning , i would summarize as saying that , these units that result , in a particular channel and a particular timeband , at that level , vary in length . and , { nonvocalsound } their recognizer would prefer that the units not be overly long . but it 's really an empirical question , whether the units we get at this point through , just that process i described might be sufficient for them . so , as a first pass through , a first chance without having to do a lot of hand - editing , what we 're gon na do , is , i 'll run it through channelize , give them those data after i ' ve done the editing process and be it 's clean . and do that , pretty quickly , with just , that minimal editing , without having to hand - break things . and then we 'll see if the units that we 're getting , with the at that level , are sufficient . and maybe they do n't need to be further broken down . and if they do need to be further broken down then maybe it just be piece - wise , maybe it wo n't be the whole thing . so , that 's what we were discussing , this morning as far as i among also we discussed some adaptational things , postdoc b: so it 's like , i had n't , incorporated , a convention explicitly to handle acronyms , but if someone says , pzm it would be to have that be directly interpretable from , the transcript what they said , or pi - tcl tcl . it 's like y it 's and so , i ' ve incorporated also convention , with that but that 's easy to handle at the post editing phase , and i 'll mention it to , transcribers for the next phase but that 's ok . and then , a similar conv , convention for numbers . so if they say one - eighty - three versus one eight three . , and also i 'll be , encoding , as i do my post - editing , the , things that are in curly brackets , which are clarificational material . and to incorporate , keyword , at the beginning . so , it 's gon na be either a gloss or it 's gon na be a vocal sound like a , laugh or a cough , or , . or a non - vocal sound like a doors door - slam , and that can be easily done with a , just a one little additional thing in the , in the general format . 
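Since the conventions for bracketed comments are being settled here, a small consistency check is easy to imagine for the post-editing pass: every curly-bracket comment should lead with a keyword saying whether it is a gloss, a vocal sound, or a non-vocal sound. The keyword spellings and the checker itself are made up for illustration; only the convention they encode comes from the discussion.

```python
# Toy post-editing check: flag curly-bracket comments that do not start
# with one of the agreed keywords. Keyword spellings are assumptions.

import re

KEYWORDS = {"gloss", "vocalsound", "nonvocalsound"}

def unkeyed_comments(line):
    bad = []
    for body in re.findall(r"\{([^}]*)\}", line):
        first = body.split()[0].lower() if body.split() else ""
        if first not in KEYWORDS:
            bad.append(body.strip())
    return bad

print(unkeyed_comments("so { vocalsound laugh } the pzm { door slam } cut out"))
# -> ['door slam']  : missing its keyword, to be fixed in post-editing
```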
phd g: we j we just needed a way to , strip , all the comments , all the things th the that linguist wants but the recognizer ca n't do anything with . , but to keep things that we mapped to like reject models , or , , mouth noise , or , cough . and then there 's this interesting issue jane brought up which i had n't thought about before but i was , realizing as i went through the transcripts , that there are some noises like , the good example was an inbreath , where a transcriber working from , the mixed , signal , does n't know whose breath it is , and they ' ve been assigning it to someone that may or may not be correct . and what we do is , if it 's a breath sound , a sound from the speaker , we map it , to , a noise model , like a mouth - noise model in the recognizer , and , it probably does n't hurt that much once in a while to have these , but , if they 're in the wrong channel , that 's , not a good idea . and then there 's also , things like door - slams that 's really in no one 's channel , they 're like it 's in the room . and , jane had this , idea of having , like an extra , couple tiers , phd g: and we were thinking , that is useful also when there 's uncertainties . so if they hear a breath and they who breath it is it 's better to put it in that channel than to put it in the speaker 's channel because maybe it was someone else 's breath , or , so that 's a good you can always clean that up , post - processing . so a lot of little details , but we 're , coming to some kinda closure , on that . so the idea is then , don can take , jane 's post - processed channelized version , and , with some scripts , convert that to a reference for the recognizer and we can , can run these . so when that 's , ready , as soon as that 's ready , and as soon as the recognizer is here we can get , twelve hours of force - aligned and recognized data . and , start , working on it , so we 're , a coup a week or two away i would say from , if that process is automatic once we get your post - process , transcript . postdoc b: and that does n't the amount of editing that it would require is not very much either . i ' m just hoping that the units that are provided in that way , { nonvocalsound } will be sufficient cuz i would save a lot of , time , dividing things . phd g: , some of them are quite long . just from how long were you did one ? grad e: i saw a couple , around twenty seconds , and that was just without looking too hard for it , so , i would imagine that there might be some that are longer . postdoc b: n one question , e w would that be a single speaker or is that multiple speakers overlapping ? grad e: no . no , but if we 're gon na segment it , like if there 's one speaker in there , that says " ok " , right in the middle , it 's gon na have a lot of dead time around it , phd g: right . it 's not the it 's not the fact that we ca n't process a twenty second segment , it 's the fact that , there 's twenty seconds in which to place one word in the wrong place phd g: , if someone has a very short utterance there , and that 's where , we , might wanna have this individual , ha have your pre - process input . phd a: i that perhaps the transcribers could start then from the those mult multi - channel , speech - nonspeech detections , if they would like to . postdoc b: in in doing the hand - marking ? that 's what i was thinking , too . phd g: so that 's probably what will happen , but we 'll try it this way and see . it 's probably good enough for force - alignment . 
if it 's not then we 're really then we def definitely , but for free recognition i ' m it 'll probably not be good enough . we 'll probably get lots of errors because of the cross - talk , and , noises and things . grad f: right , we don so the only thing we 'll have extra now is just the lapel . not not the , bodypack , just the lapel . grad f: , and then one of the one of those . since , what i decided to do , on morgan 's suggestion , was just get two , new microphones , and try them out . and then , if we like them we 'll get more . grad f: since they 're like two hundred bucks a piece , we wo n't , at least try them out . grad f: it 's , it 's by crown , and it 's one of these mount around the ear thingies , and , when i s when i mentioned that we thought it was uncomfortable he said it was a common problem with the sony . and this is how a lot of people are getting around it . and i checked on the web , and every site i went to , raved about this particular mike . it 's comfortable and stays on the head , so we 'll see if it 's any good . but , it 's promising . professor c: for the recor for the record adam is not a paid employee or a consultant of crown . professor c: i said " for the record adam is not a paid consultant or employee of crown " . grad f: these are crown are n't they ? the p z ms are crown , they were . professor c: so if we go to a workshop about all this it 's gon na be a meeting about meetings .
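Stepping back to the comment stripping and noise mapping discussed just before the microphone tangent, the filtering step could look roughly like the following: annotations the recognizer has models for are kept as noise tokens, room-level noises (which belong on a separate tier, not a speaker channel) and unknown comments are stripped. The mapping table and token spellings are placeholders; the real scripts and model names used with the sri recognizer are not given here.

```python
# Illustrative comment filter for building recognizer reference strings.

NOISE_MAP = {"breath": "[mouth-noise]", "cough": "[mouth-noise]", "laugh": "[mouth-noise]"}

def to_reference(tokens):
    out = []
    for tok in tokens:
        if tok.startswith("{") and tok.endswith("}"):
            mapped = NOISE_MAP.get(tok[1:-1].strip())
            if mapped:
                out.append(mapped)      # speaker noise -> noise-model token
            # door-slams and other room noises fall through and are stripped
        else:
            out.append(tok)
    return " ".join(out)

print(to_reference(["so", "{breath}", "we", "{door-slam}", "ran", "it"]))
# -> "so [mouth-noise] we ran it"
```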
the berkeley meeting recorder group discussed the collection status for a set of connected digits recordings that are nearly complete and ready to be trained on a recognizer. anticipated results were discussed in reference to results obtained for other digits corpora , i.e . aurora and ti-digits. the group also considered the prospect of performing fine-grained acoustic-phonetic analyses on a subset of meeting recorder digits or switchboard data. pre-segmentation manipulations that allow for the segmentation of channel-specific speech/non-speech portions of the signal and the distinction of foreground versus background speech were discussed. finally , speakers fe008 and fe016 reported on new efforts to adapt transcriptions to the needs of the sri recognizer , including conventions for encoding acronyms , numbers , ambient noise , and unidentified inbreaths. the group decided to delegate the extraction of digits to the transcriber pool. a tentative decision was also made to task transcribers with labelling a subset of digits or switchboard data for fine-grained acoustic-phonetic features. speaker fe008 will run selected meeting recorder data through channelize and determine whether the resulting units are of a sufficient length. with respect to encoding more fine-grained acoustic information in transcriptions , the question was posed: which features should be marked? speaker mn014 reported problems pre-segmenting speech recorded via the lapel microphones. normalization of the energy measured across and within channels is problematic when performed for speakers who say little or nothing during meetings. the evaluation of pre-segmented data is difficult without tightly transcribed time references to the individual channels from which the speech was derived. the sri recognizer requires that multi-channel format units not be too large , indicating that some additional pre-processing of unit lengths may be necessary. a test set of meeting recorder digits is nearly complete. future work will include training this data on a recognizer , and feeding the recognizer with corresponding far-field microphone data. it was noted that the results of experiments testing similar digits corpora have yielded high error rates , indicating that similar problems may be expected for the set of meeting recorder digits. the group discussed the prospect of performing fine-grained acoustic-phonetic analyses on a subset of digits or switchboard data. it was suggested that prior to the use of data-driven methods , knowledge-driven approaches should be used to 'seed' the data with sub-phonemic features , either manually , or using a rich pronunciation dictionary. a new version of the pre-segmentation tool that segments channel-specific speech/non-speech portions of the signal has been developed and tested. future pre-segmentation work will include normalizing other features , such as loudness , enabling the distinction of foreground versus background speech. speaker mn014 will also look at cross-correlations for removing false overlaps. new efforts were reported to adapt transcriptions to the needs of the sri recognizer , including conventions for encoding acronyms , numbers , ambient noise , and unidentified inbreaths. with the arrival of the sri recognizer , 12 hours of forced-aligned , recognized data can be expected.
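The energy normalization the summary refers to can be pictured as follows: a frame counts as foreground speech on a channel when its energy stands out both against the other channels at that frame and against that channel's own floor. The feature combination and threshold below are guesses for illustration, and the code hints at why a channel whose speaker says little or nothing is a problem: its floor is just noise, so the within-channel term becomes uninformative.

```python
# Sketch of channel-normalized energy for speech/non-speech marking.
# energies: (n_channels, n_frames) array of frame energies, one row per mike.

import numpy as np

def foreground_mask(energies, threshold=2.0):
    cross = energies / (energies.mean(axis=0, keepdims=True) + 1e-9)   # vs. other channels
    floor = energies.min(axis=1, keepdims=True) + 1e-9                 # per-channel floor
    within = energies / floor                                          # vs. own floor
    # for a near-silent speaker, floor ~ every frame, so "within" ~ 1 everywhere
    return (cross * within) > threshold
```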
professor b: ok . ami , do yours then we 'll open it and it 'll be enough . grad a: mmm does n't , it should be the other way . , now it 's on . professor b: alright . anyway . so , before we get started with the , technical part , want to review what is happening with the our data collection . professor b: so , probably after today , that should n't come up in this meeting . th - this is s should be i m it is n't there 's another thing going on of gathering data , and that 's independent of this . but , want to make we 're all together on this . what we think is gon na happen is that , in parallel starting about now we 're gon na get fey to , where you 're working with me and robert , draft a note that we 're gon na send out to various cogsci c and other classes saying , " here 's an opportunity to be a subject . contact fey . " and then there 'll be a certain number of , hours during the week which she will be available and we 'll bring in people . , roughly how many , robert ? we d do we know ? professor b: ok . so , we 're looking for a total of fifty people , not necessarily by any means all students but we 'll s we 'll start with that . in parallel with that , we 're gon na need to actually do the script . and , so , i there 's a plan to have a meeting friday afternoon , with , jane , and maybe liz and whoever , on actually getting the script worked out . but what i 'd like to do , if it 's o k , is to s to , as i say , start the recruiting in parallel and possibly start running subjects next week . the week after that 's spring break , and maybe we 'll look for them some subjects next door or i grad c: also , f both fey and i will , do something of which i may , kindly ask you to do the same thing , which is we gon na check out our social infrastructures for possible subjects . meaning , kid children 's gymnastic classes , pre - school parents and . they also sometimes have flexible schedules . so , if you happen to be in a non - student social setting , and people who may be interested in being subjects we also considered using the berkeley high school and their teachers , maybe , and get them interested in . grad c: but i will just make a first draft of the , note , the " write - up " note , send it to you and fey and then grad c: and , are we have we concurred that , these forms are sufficient for us , and necessary ? professor b: , . there 's one tricky part about , they have the right i the last paragraph " if you agree to participate you have the opportunity to have anything excised which you would prefer not to have included in the data set . " ok ? now that , we had to be included for this other one which might have , meetings , about something . professor b: in this case , it does n't really make sense . , so what i 'd like to do is also have our subjects sign a waiver saying " i do n't want to see the final transcript " . and if they do n't if they say " no , i ' m not willing to sign that " , then we 'll show them the final transcript . but , . professor b: that , so we might actually , s i jane may say that , " , you ca n't do this " , " on the same form , we need a separate form . " but anyway . i 'd like to , e , add an a little thi a thing for them to initial , saying " nah , do i do n't want to see the final transcript . " but other than that , that 's one 's been approved , this really is the same project , rec and . so we just go with it . grad c: ok . so much for the data , except that with munich everything is fine now . they 're gon na transcribe . 
they 're also gon na translate the , german data from the tv and cinema for andreas . they 're they all seem to be happy now , with that . w c sh should we move on to the technical sides ? i the good news of last week was the parser . so , bhaskara and i started working on the parser . then bhaskara went to class and once he came back , it was finished . it , i did n't measure it , but it was about an hour and ten minutes . and , and now it 's we have a complete english parser that does everything the german parser does . grad e: what did you end up having to do ? , wha was there anything interesting about it ? grad c: , w we d the first we did is we tried to do change the " laufen " into " run " , or " running " , or " runs " . and we noticed that whatever we tried to do , it no effect . grad c: and , the reason was that the parser i c completely ignores the verb . so this sentence is parses the p the same output , grad c: that 's what you need . if if you 'd add today and evening , it 'll add time or not . grad c: so it i it does look at that . but all the rest is p simply frosting on the cake , and it 's optional for that parser . professor b: so , you can sho you you are are you gon na show us the little templates ? grad c: . we ar we can sh er show you the templates . i also have it running here , grad c: so if i do this now , you can see that it parsed the wonderful english sentence , " which films are on the cinema today evening ? " but , . grad c: it could be " this evening , which films are on the cinema " , or " running in the cinema , which " , " today evening " , i " is anything happening in the cinema this evening ? " professor b: actually , it 's a little tricky , in that there 's some allowable german orders which are n't allowable english orders and . and it is order - based . so it is n't it ? grad c: , these sentences are just silly . , d these were not the ones we actually did it . what 's an idiomatic of phrasing this ? which films are showing ? grad d: you want to get it ? or is di was it easy to get it ? grad c: ok . so . wonderful parse , same thing . except that we d w we do n't have this , time information here now , which is , . this are the reserve . anyways . so . these are the ten different sentence types that the parser was able to do . and it still is , now in english . and , you have already to make it a little bit more elaborate , right ? grad d: , i changed those sentences to make it , more , idiomatic . and , you can have i many variations in those sentences , they will still parse fine . so , in a sense it 's pretty broad . grad c: so , if you want to look at the templates , they 're conveniently located in a file , " template " . , and this is what i had to do . i had to change , @ " spielfilm " to " film " , " film " to " movie " , cinem " kino " to " cinema " to " today " heu " heute " to " today " , evening " abend " to " evening " grad d: one thing i was wondering , was , those functions there , are those things that modify the m - three - l ? ok . grad c: but we 'll get to that in a second . and so this means , this and " see " are not optional . want i like is all maybe in there , but may also not be in there . professor b: so so , if it says " this " and " see " , it also will work in " see " and " this " ? in the other order ? with those two key words ? grad c: action watch , whatever . nothing was specialfi specified . except that it has some references to audio - visual media here . 
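the template mechanism just described can be illustrated with a minimal sketch : a template that ignores the verb entirely , requires a few keywords in order , and treats everything else as optional , score-raising material . everything below ( the Template class , the intent name , the scoring rule ) is a hypothetical reconstruction for illustration , not the actual smartkom parser code :

```python
# minimal sketch of an order-based keyword-template matcher of the kind
# described in the discussion; all names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Template:
    intent: str     # the M3L action this template would produce
    required: list  # keywords that must appear, in this order
    optional: list  # keywords that only raise the score if present

    def match(self, words):
        """Return a score in (0, 1], or None if a required keyword is missing."""
        pos = -1
        for kw in self.required:            # order-based: each required keyword
            if kw not in words[pos + 1:]:   # must appear after the previous one
                return None
            pos = words.index(kw, pos + 1)
        hits = sum(1 for kw in self.optional if kw in words)
        return (1 + hits) / (1 + len(self.optional))

t = Template("av_query", ["which", "films"], ["cinema", "today", "evening"])
print(t.match("which films are on the cinema today evening".split()))  # 1.0
print(t.match("which films are showing".split()))                      # 0.25
```

note how the verb never appears among the required keywords , which would explain why changing " laufen " to " run " had no effect on the parse .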
grad c: where it gets that from it 's correct , but i where it gets it from . grad d: one thing i was wondering was , those percentage signs , right ? so , why do we even have them ? because if you did n't have them grad c: , i 'll tell you why . because it gives a you a score . and the value of the score is , v i assume , i , the more of these optional things that are actually in there , the higher the r score it is . grad c: so we should n't belittle it too much . it 's doing something , some things , and it 's very flexible . i ' ve just tried to be . grad c: ok . , let 's hope that the generation will not be more difficult , even though the generator is a little bit more complex . but we 'll mmm , that means we may need two hours and twenty minutes rather than an hour ten minutes , i hope . grad c: and the next thing i would like to be able to do , and it seems like this would not be too difficult either , is to say , " ok let 's now pretend we actually wanted to not only change the mapping of , words to the m - three - l but we also wanted to change add a new sentence type and make up some new m - three - l s " professor b: so that 'd be great . it would be a good exercise to just see whether one can get that to run . grad d: fine , . , so where are those functions " action " , " goodbye " , and so on , right ? are they actually , are they going to be called ? , are they present in the code for the parser ? grad c: . what it does , it i it does something fancy . it loads it has these style sheets and also the , schemata . so what it probably does , is it takes the , is this where it is ? this is already the xml ? this is where it takes its own , syntax , and converts it somehow . where is the grad c: , where it actually produces the xml out of the , parsed . no , this is not it . i ca n't find it now . you mean , where the act how the action " goodbye " maps into something grad c: this is what happens . this is what you would need to change to get the , xml changed . so when it encounts encounters " day " , it will , activate those h classes in the xml but , i saw those actions , the " goodbye " somewhere . , . grad d: mmm . m - three - l dot dtd ? that 's just a specification for the xml format . grad c: , we 'll find that out . so whatever n this does this is , looks l to me like a function call , right ? grad c: so , whenever it encounters " goodbye " , which we can make it do in a second , here grad d: each of those functions act on the current xml structure , and change it in some way , by adding a l a field to it , . professor b: y . they also seem to affect state , some of them there were other actions , that s seemed to step state variables somewhere , like the n s " discourse status confirm " . ok . so that 's going to be a call on the discourse and confirm that it 's grad c: and so whenever say , " write " , it will put this in here . professor b: , so it always just is it so it , go back , then , cuz it may be that all those th things , while they look like function calls , are just a way of adding exactly that to the xml . grad c: , we 'll see , when we say , let 's test something , goodbye , causes it to c to create an " action goodbye - end - action " . which is a means of telling the system to shut down . now , if we know that " write " produces a " feature discourse - status confirm discourse - status " . so if i now say " write , goodbye , " it should do that . it sho it creates this , grad d: right there . 
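the behaviour described here , where keywords like " goodbye " and " write " look like function calls , can be sketched as handlers that each edit the current m - three - l tree in place . the tag layout and handler names below are guesses for illustration , not the real smartkom code :

```python
# hypothetical sketch: each keyword handler modifies the current M3L XML
# structure, which would explain why "goodbye" lands under the content
# while "write" only steps a discourse-state feature.
import xml.etree.ElementTree as ET

def action_goodbye(m3l):
    # a means of telling the system to shut down
    ET.SubElement(m3l.find("content"), "action").text = "goodbye"

def discourse_confirm(m3l):
    # steps a discourse state variable rather than adding content
    ET.SubElement(m3l.find("features"), "discourse-status").text = "confirm"

handlers = {"goodbye": action_goodbye, "write": discourse_confirm}

m3l = ET.Element("m3l")
ET.SubElement(m3l, "content")
ET.SubElement(m3l, "features")
for word in "write goodbye".split():
    handlers[word](m3l)

print(ET.tostring(m3l).decode())
# output (wrapped for readability):
# <m3l><content><action>goodbye</action></content>
# <features><discourse-status>confirm</discourse-status></features></m3l>
```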
but there is some function call , because how does it know to put goodbye in content , but , confirm in features ? professor b: good point . it 's it 's the it 's under what sub - type you 're doing it . grad a: , it just automatically initializes things that are common , right ? so it 's just a shorthand . grad c: , this is german . so , now , this , it can not do anymore . nothing comes out of here . grad c: so , it does n't speak german anymore , but it does speak english . and there is , here , a reference so , this tells us that whatever is has the id " zero " is referenced here by @ the restriction seed and this is exa " i want " what was the sentence ? grad c: need two seats here . nuh . and where is it playing ? there should also be a reference to something , maybe . our d this is re here , we change and so , we here we add something to the discourse - status , that the user wants to change something that was done before and that , whatever is being changed has something to do with the cinema . grad a: so then , whatever takes this m - three - l is what actually changes the state , not the , ok . professor b: no , right , the discourse maintainer , i see . and it and it runs around looking for discourse status tags , and doing whatever it does with them . and other people ignore those tags . so , . i definitely think it 's it 's worth the exercise of trying to actually add something that is n't there . professor b: , a kid understanding what 's going on . then the next thing we talked about is actually , figuring out how to add our own tags , and like that . grad c: point number two . i got the , m - three - l for the routes today . , so i got some more . this is the , interesting . it 's just going up , it 's not going back down . so , this is , what i got today is the new m - three - l for , the maps , and with some examples so , this is the xml and this is what it will look like later on , even though it you ca n't see it on this resolution . and this is what it is the structure of map requests , also not very interesting , and here is the more interesting for us , is the routes , route elements , and , again , as we thought it 's really simple . this is the , , parameters . we have @ simple " from objects " and " to objects " and , points of interest along the way i asked them whether or not we could , first of all , i was little bit it seemed to me that this m way of doing it is a stack a step backwards from the way we ' ve done it before . t it seems to me that some notions were missing . professor b: so this is not a complicated negotiation . there 's there 's not seven committees , or anything , right ? professor b: great . so this is just trying to it 's a design thing , not a political thing . once we ' ve we can just agree on what oughta be done . good . grad c: exactly . however , the , e so that you understand , it is really simple . you you have a route , and you cut it up in different pieces . and every element of that e r f of that every segment we call a " route element " . and so , from a to b we cut up in three different steps , and every step has a " from object " where you start , a " to object " where y where you end , and some points of interest along the way . what w i was missing here , and , maybe it was just me being too stupid , is , i did n't get the notion of the global goal of the whole route . really , s was not straightforward visibly for me . and some other . 
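to make the structure concrete , here is a guessed xml sketch of one such route . the tag names follow the description above ( route elements with from/to objects and points of interest ) but are illustrative , not the schema actually delivered , and the global goal element is exactly the piece noted as missing :

```xml
<!-- illustrative sketch only; the real M3L tag names may differ -->
<route>
  <!-- a global goal tag like <goal ref="powder_tower"/> is what was missing -->
  <route_element>
    <from_object ref="hotel_zum_ritter"/>
    <to_object ref="kornmarkt"/>
    <point_of_interest ref="heiliggeistkirche"/>
  </route_element>
  <route_element>
    <from_object ref="kornmarkt"/>
    <to_object ref="powder_tower"/>
  </route_element>
</route>
```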
and i suggested that they should n be k , kind enough to do s two things for us , is one , also allocating , some tags for our action schema enter - vista - approach , and and also , since you had suggested that , we figure out if we ever , for a demo reason , wanted to shortcut directly to the g gis and the planner , of how we can do it . now , what 's the state of the art of getting to entrances , what 's the syntax for that , how get getting to vista points and calculating those on the spot . and the approach mode , anyhow , is the default . that 's all they do it these days . wherever you 'll find a route planner it n does nothing but get to the closest point where the street network is at minimal distance to the geometric center . professor b: - . so , let now , this is important . let , i want a again , outside of m almost managerial point , you 're in the midst of this , so better . but it seems to me it 's probably a good idea to li minimize the number of , change requests we make of them . so it seemed to me , what we ought to do is get our story together . and think about it some , internally , before asking them to make changes . does this does this make sense to you guys ? it you 're doing the interaction but it seemed to me that what we ought to do is come up with a , something where you , and i who 's mok working most closely on it . probably johno . , take what they have , send it to everybody saying " this is what they have , this is what we think we should add " , and then have a d a an iteration within our group saying " , " and get our best idea of what we should add . and then go back to them . is i or , i does this make sense to you ? or grad c: . especially if we want , what i my feeling was we reserved something that has a r an ok label . that 's th that was my th first step . i w no matter how we want to call it , this is our playground . and if we get something in there that is a structure elaborate and complex enough to maybe enable a whole simulation , one of these days , that would be u the perfect goal . professor b: that 's right . so , . the problem is n't the short ra range optimization . it 's the o one or two year thing . ok . what are the thl class of things we think we might try to do in a year or two ? how how would we try to characterize those and what do we want to request now that 's leave enough space to do all that ? and that re that requires some thought . and so that sounds like a great thing to do as the priority item , as soon as we can do it . so y so you guys will send to the rest of us a version of , this , and the , description professor b: tu , tur change the description to , english and , then then , . then , with some sug s suggestions about where do we go from here ? , this and this , was just the action end . , at some point we 're going to have to worry about the language end . but for the moment just , t for this class of things , we might want to try to encompass . and grad a: then the scope of this is beyond approach and vis - or vista . , . professor b: , yeah . this is this is everything that , , we might want to do in the next couple years . grad a: but i ' m just but the so this xml here just has to do with source - path - goal type , in terms of traveling through heidelberg . or travel , specifically . so , but this o is the domain greater than that ? professor b: no . i think the i the idea is that . it 's beyond source - path - goal , but we do n't need to get beyond it @ tourists in heidelberg . 
it seems to me we can get all the complexity we want in actions and in language without going outside of tourists in heidelberg . but , i depending on what people are interested in , one could have , tours , one could have , explanations of why something is , why was this done , or , no there 's no end to the complexity you can build into the , what a tourist in heidelberg might ask . so , at least unless somebody else wants t to suggest otherwise the general domain we do n't have t to , broaden . that is , tourists in heidelberg . and if there 's something somebody comes up with that ca n't be done that way , then , . w we 'll look at that , but i 'd be s i 'd be surprised at if there 's any important issue that and , if you want to , push us into reference problems , that would be great . professor b: ok , so this is his specialty is reference , and , what are these things referring to ? not only anaphora , but , more generally the , this whole issue of , referring expressions , and , what is it that they 're actually dealing with in the world ? and , again , this is li in the databa this is also pretty formed because there is an ontology , and the database , and . so it is n't like , , the evening star or like that . i i it all the entities do have concrete reference . although th the to get at them from a language may not be trivial . there are n't really deep mysteries about , what w what things the system knows about . professor b: you have proper names , and descriptions . and a l and a lot and anaphora , and pronouns , grad c: now , we hav the whole unfortunately , the whole database is , in german . we have just commissioned someone to translate some bits of it , ie the e the shortest k the more general descriptions of all the objects and , persons and events . so , it 's a relational database with persons , events , and , objects . and it 's quite , there . but did y i there will be great because the reference problem really is not trivial , even if you have such a g - defined world . grad a: could you give me an example of a reference problem ? so l make it more concrete ? grad c: how do i get to the powder - tower ? we t think that our bit in this problem is interesting , but , just to get from powder - tower to an object i id in a database is also not really trivial . phd f: or or if you take something even more scary , " how do i get to the third building after the tower ? the ple - powder - tower ? " , you need some mechanism for professor b: or you can say " how " , " how do i get back ? " and , again , it 's just a question of which of these things , people want to dive into . what , i ' m gon na try to do , and i , pwww ! let 's say that by the end of spring break , i 'll try to come up with some general story about , construction grammar , and what constructions we 'd use and how all this might fit together . there 's this whole framework problem that i ' m feeling really uncomfortable about . and i have n't had a chance to think about it . but i want to do that early , rather than late . and you and i will probably have to talk about this some . grad c: u that 's what strikes me , that we the de g , small something , maybe we should address one of these days , is to that most of the work people actually always do is look at some statements , and analyze those . whether it 's abstracts or newspapers and like this . but the whole i is it really relevant that we are dealing mostly with , questions ? 
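returning to the reference problem raised above , a toy example makes its flavour visible : even with a closed database , a referring expression rarely matches an object id literally , so at minimum an alias table is needed , and expressions like " the third building after the tower " require real spatial reasoning . the ids and aliases below are invented :

```python
# toy illustration of reference resolution against a fixed database;
# object ids and aliases are invented for the example.
aliases = {
    "powder tower": "obj_0042",
    "powder-tower": "obj_0042",
    "pulverturm":   "obj_0042",   # the German name used in the database
}

def resolve(expr):
    key = expr.lower().strip()
    if key.startswith("the "):
        key = key[4:]
    return aliases.get(key)       # None: hand off to a smarter resolver

print(resolve("the Powder-Tower"))                    # obj_0042
print(resolve("the third building after the tower"))  # None
```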
grad c: and this is it seems to me that we should maybe at least spend a session or brainstorm a little bit about whether that l this is special case in that sense . i . did we ever find m metaphorical use in questions in that sense , really ? professor b: , we could take all the standard metaphor examples and make question versions of them . professor b: , or , . " wh - why is he pushing for promotion ? " or , " who 's pushing proof " er , just pick any of them and just do the so i do n't think , it 's difficult , to convert them to question forms that really exist and people say all the time , and we how to handle them , too . right ? , it 's i d it we how to handle the declarative forms , @ really , and , then , the interrogative forms , - . grad e: . it 's just that the goals are g very different to cases so we had this problem last year when we first thought about this domain , actually , was that most of the things we talked about are our story understanding . , we 're gon na have a short discourse and the person talking is trying to , i , give you a statement and tell you something . and here , it 's th grad e: and then here , y you are j , the person is getting information and they or may not be following some larger plan , that we have to recognize or , infer . and th the their discourse patterns probably { nonvocalsound } do n't follo follow quite as many logical connec professor b: right . no , that 's one of things that 's interesting , is in this over - arching story we worked it out for th as you say , this the storytelling scenario . and it 's really worth thinking through what it looks like . what is the simspec mean , et cetera . grad e: m right . cuz for a while we were thinking , " , how can we change the , data to illicit tha illicit , actions that are more like what we are used to ? " but we would rather , try to figure out what 's , professor b: , i . , maybe that 's what we 'll do is s u e we can do anything we want with it . , once we have fulfilled these requirements , professor b: ok , and the one for next , summer is just half done and then the other half is this , " generation thing " which we think is n't much different . so once that 's done , then all the rest of it is , , what we want to do for the research . and we can w we can do all sorts of things that do n't fit into their framework . th - there 's no reason why we 're c we 're constrained to do that . if we can use all the , execution engines , then we can , really { nonvocalsound } try things that would be too much pain to do ourselves . but there 's no obligation on any of this . so , if we want to turn it into u understan standing stories about heidelberg , we can do that . , that would just be a t a grad c: or , we need and if we ' r take a ten year perspective , we need to do that , because w e w a assuming we have this , we ta in that case we actually do have these wonderful stories , and historical anecdotes , and knights jumping out of windows , grad c: and - and tons of . so , th the database is huge , and if we want to answer a question on that , we actually have to go one step before that , and understand that . in order to e do sensible information extraction . grad c: and so , this has been a deep map research issue that was is part of the unresolved , and to - do 's , and something for the future , is how can we run our text , our content , through a machine that will enable us , later , to retrieve or answer e questions more sensibly ? 
phd f: so , i was just going to ask , so , what is the basic thing that you are , obligated to do , , by the summer before w y c we can move professor b: so , what happened is , there 's this , robert was describing the there 's two packages there 's a , quote parser , there 's a particular piece of this big system , which , in german , takes these t sentence templates and produces xml structures . and one of our jobs was to make the english equivalent of that . that , these guys did in a day . the other thing is , at the other end , roughly at the same level , there 's something that takes , x m l structures , produces an output xml structure which is instructions for the generator . and then there 's a language generator , and then after that a s a synthesizer that goes from an xml structure to , language generation , to actual specifications for a synthesizer . , but again , there 's one module in which there 's one piece that we have to convert to english . professor b: is that and that but as i say , this is all along was viewed as a m a minor thing , necessary , but not and much more interesting is the fact that , as part of doing this , we are , inheriting this system that does all these other things . professor b: not precisely what we want , and that 's wh where it gets difficult . and i do n't pretend to understand yet what we really ought to do . grad c: so , e enough of that , but i , , mmm , the e , johno and i will take up that responsibility , and , get a first draft of that . now , we have just , two more short things . , y you guys started fighting , on the bayes - net " noisy - or " front ? grad d: , i should , talk a little bit about that , because that might be a good , architecture to have , in general for , problems with , multiple inputs to a node . professor b: good ! ok . and what 's the other one ? so that just we the d agenda is ? professor b: i ' ve got a couple new wu papers as . , so i ' ve been in contact with wu , so , probably let 's put that off till i understand better , what he 's doing . it 's just a little embarrassing all this was in his thesis and i was on his thesis committee , and , so , i r really knew this at one time . professor b: but , i it 's not only is part of what i have n't figured out yet is how all this goes together . so i 'll dig up some more from dekai . and so why do n't we just do the , grad d: so , recall that , we want to have this structure in our bayes - nets . namely , that , you have these nodes that have several bands , right ? does , they the typical example is that , these are all a bunch of cues for something , and this is a certain effect that we 'd like to conclude . so , like , let 's just look at the case when , this is actually the final action , right ? so this is like , , touch , grad d: , e - eva , right ? enter , v view , approach , right ? grad d: and say , i , it could be , like this is n't the way it really is , but let me say that , suppose someone mentioned , admission fees , it takes too long . try let me just say " landmark " . if a landmark , then there 's another thing that says if it 's closed or not , at the moment . alright , so you have nodes . right ? and the , problem that we were having was that , given n - nodes , there 's " two to the n " given n - nodes , and furthermore , the fact that there 's three things here , we need to specify " three times " , " two to the n " probabilities . that 's assuming these are all binary , which f they may not be . 
, they could be " time of day " , in which case we could , say , " morning , afternoon , evening , night " . so , this could be more so , it 's a lot , anyway . and , that 's a lot of probabilities to put here , which is a pain . so noisy - ors are a way to , deal with this . where should i put this ? so , the idea is that , let 's call these , c - one , c - two , c - three , and c - four , and e , for and effect , i . the idea is to have these intermediate nodes . , actually , the idea , first of all , is that each of these things has a quote - unquote distinguished state , which means that this is the state in which we do n't really know anything about it . right ? so , if we do n't really know if a landmark or not , or , i if that just does n't seem relevant , then that would be th the disting - the distinguish state . it 's a really , if there is something for the person talking about the admission fee , if they did n't talk about it , that would be the distinguish state . grad d: , . that 's just what they the word they used in that paper . so , the idea is that , you have these intermediate nodes , right ? e - one , e - two , e - three and e - four ? grad d: so the idea is that , each of these ei is represents what this would be if all the other ones were in the distinguish state . right ? so , suppose that the person , suppose the thing that they talked about is a landmark . but none of the other cues really apply . then , this would be w the this would just represent the probability distribution of this , assuming that this cue is turned on and the other ones just did n't apply ? so , if it is a landmark , and no none of the other things really ap applicable , then this would represent the probability distribution . so maybe in this case maybe we just t k maybe we decide that , if the thing 's a landmark and we anything else , then we 're gon na conclude that , they want to view it with probability , point four . they want to enter it with probability , with probability point five and they want to approach it probability point one , say so we come up with these l little tables for each of those and the final thing is that , this is a deterministic function of these , so we do n't need to specify any probabilities . we just have to , say what function this is , right ? so we can let this be , g of e - one comma e - two . e - three , e - four . right ? and our example g would be , a majority vote ? professor b: . ok , so th so the important point is w not what the g function is . the important point is that there is a general idea of shortcutting the full cpt . th - c the full conditional probability table with some function . which y w you choose appropriately for each case . so , depending on what your situation is , there are different functions which are most appropriate . so i gave bhaskara a copy of this , " ninety - two " paper . d and you got one , robert . i who else has seen it . professor b: it 's short . so , i u w , yo you have you read it yet ? professor b: ok , so you should take a look . nancy , i ' m you read it at some point in life . professor b: anyway . so the paper is n't th is n't real hard . one of the questions just come at bhaskara is , " how much of this does javabayes support ? " grad d: , it 's a good question . { nonvocalsound } the so what we want , is javabayes to support deterministic , functions . and , in a sense it sup we can make it supported by , manually , entering , probabilities that are one and zeros , right ? professor b: right . 
so the little handout that the little thing that i sent a message saying , here is a way to take one thing you could do , which is s in a way , stupid , is take this deterministic function , and use it to build the cpt . so , if ba - javabayes wo n't do it for you , that you can convert all that into what the cpt would be . and , what i sent out about a week ago , was an idea of how to do that , for , evidence combination . so one of one function that you could use as your " g function " is an e evidence - combining . so you just take the , if each of th if each of the ones has its own little table like that , then you could take the , strength of each of those , times its little table , and you 'd add up the total evidence for " v " , " e " , and " a " . grad d: i do n't think you can do this , because g is a function from that to that . so there 's no numbers . there 's just quadruplets of , n - duplets of , e vs . professor b: i i no , no but i ' m saying is there is a w , if y if you decide what 's what is appropriate , is probablistic evidence combination , you can write a function that does it . it 's a pui it 's actually one of the examples he 's got in there . but , anyway , s skipping the question of exactly which functions now is it clear that you might like to be able to shortcut the whole conditional probability table . grad c: , in some it seems very plausible in some sense , where we will be likely to not be observe some of the . cuz we do n't have the a access to the information . professor b: that 's one of the problems , is , w is is , where would th where would it all come from ? grad c: i if it 's a discar discourse initial phrase , we will have nothing in the discourse history . so , if we ever want to wonder what was mention grad d: . a are you saying that we 'll not be able to observe certain nodes ? that 's fine . that is orthogonal thing . professor b: , so there 's two separate things , robert . the f the bayes - nets in general are quite good at saying , " if you have no current information about this variable just take the prior for that . " ok ? th - that 's what they 're real good at . so , if you do n't have any information about the discourse , you just use your priors of whatever the discourse , whatever w it 's probabilistically , whatever it would be . and it 's not a great estimate , but it 's the best one you have , and , . so that , they 're good at . but the other problem is , how do you fill in all these numbers ? and that 's the one he was getting at . grad d: so , specifically in this case you have to f have this many numbers , whereas in this case you just have to have three for this , three for this . right ? so you have to have just three n ? so , this is much smaller than that . grad e: so , you do n't need da data enough to cover , nearly as much . grad a: so , really , i what a a noisy - or seems to " neural - net - acize " these bayes - nets ? professor b: to some so , " noisy - or " is a funny way of referring to this , because the noisy - or is only one instance . professor b: that one actually is n't a noisy - or . so we 'll have to think of a way t grad a: , my point was more that we just with the neural net , right , things come in , you have a function that combines them professor b: , it tha - that 's true . it is a is also more neural - net - like , although , it is n't necessarily sum , s , sum of weights or anything like that . 
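the conversion described here is mechanical enough to sketch . assuming the majority-vote example from the discussion as the g function ( the actual choice is still open ) , a deterministic node can be expanded into an explicit table of ones and zeros that a table-only package like javabayes will accept :

```python
# sketch: expand a deterministic g over the intermediate nodes into a full
# CPT of 0/1 entries, as suggested for packages that only take tables.
from itertools import product

STATES = ("view", "enter", "approach")   # states of each e_i and of E
N = 4                                    # number of intermediate nodes

def g(es):
    # example g: majority vote, ties broken by the fixed state order
    return max(STATES, key=lambda s: (es.count(s), -STATES.index(s)))

cpt = {}
for combo in product(STATES, repeat=N):  # 3**4 = 81 parent configurations
    winner = g(combo)
    for s in STATES:
        cpt[combo + (s,)] = 1.0 if s == winner else 0.0

print(len(cpt))                                            # 243 entries
print(cpt[("view", "view", "enter", "approach", "view")])  # 1.0
```

blown up , as the discussion notes , but it requires no data fitting : every entry follows from g .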
professor b: i you could have , like the noisy - or function , really is one that 's essentially says , take the max . professor b: but anyway . and , i thi that 's the standard way people get around the there are a couple other ones . there are ways of breaking this up into s to subnets and like that . but , the we definitely it 's a great idea tha to pursue that . grad c: wha - still leaves one question . it you can always see easily that i ' m not grasping everything correctly , but what seemed attractive to me in i m in the last discussion we had , was that we find out a means of getting these point four , point five , point one , of c - four , not because , a is a landmark or not , but we label this whatever object type , and if it 's a garden , it 's point three , point four , point two . if it 's a castle , it 's point eight , point one . if it 's , a town hall , it 's point two , point three , point five . and we do n't want to write this down necessarily every time for something but , let 's see . grad d: it 'll be students where else would it be stored ? that 's the question . grad c: , in the beginning , we 'll write up a flat file . we know we have twenty object types and we 'll write it down in a flat file . professor b: so , i is , let me say something , guys , cuz there 's not there 's a pretty point about this we might as get in right now . which is the hierarchy that s comes with the ontology is just what you want for this . so that , if about it let 's say , a particular town hall that , it 's one that is a monument , then , that would be stored there . if you do n't , you look up the hierarchy , so , you may or so , then you 'd have this little vector of , , approach mode or eva mode . let 's ok , so we have the eva vector for various kinds of landmarks . if it for a specific landmark you put it there . if you do n't , you just go up the hierarchy to the first place you find one . professor b: , or , link to but in any case i view it logically as being in the ontology . it 's part of what about a an object , is its eva vector . and , if yo as i say , if about a specific object , you put it there . this is part of what dekai was doing . so , when we get to wu , the - e we 'll see w what he says about that . and , then if you if it is n't there , it 's higher , and if you anything except that it 's a b it 's a building , then up at the highest thing , you have the pr what amounts to a prior . if you anything else about a building , you just take whatever your crude approximation is up at that level , which might be equal , or whatever it is . so , that 's a very pretty relationship between these local vectors and the ontology . and it seems to me the obvious thing to do , unless we find a reason to do something different . does this make sense to you ? bhask - ? grad d: so , we are but we 're not doing the ontology , so we have to get to whoever is doing the u ultimately , professor b: so , that 's another thing we 're gon na need to do , is , to , either professor b: we 're gon na need some way to either get a p tag in the ontology , or add fields , or some way to associate or , w it may be that all we can do is , some of our own hash tables that it th - the th , there 's always a way to do that . it 's a just a question of grad c: but it 's , it strikes me as a what for if we get the mechanism , that will be the wonderful part . 
and then , how to make it work is the second part , in the sense that , m the guy who was doing the ontology , s ap apologized that i it will take him another through two to three days because they 're having really trouble getting the upper level straight , right now . the reason is , given the craw bet , the projects that all carry their own taxonomy and , on all history , they 're really trying to build one top level ontology ft that covers all the eml projects , and that 's , a tough cookie , a little bit tougher than they figured . i could have told them s so . but , nevertheless , it 's going to be there by n by , next monday and i will show you what 's what some examples from that for towers , and . what i do n't think is ever going to be in the ontology , is , the likelihood of , people entering r town halls , and looking at town halls , and approaching town halls , especially since we are b dealing with a case - based , not an instance - based ontology . so , there will be nothing on that town hall , or on the berkeley town hall , or on the heidelberg town hall , it 'll just be information on town halls . grad c: , that 's hhh . that 's that 's al different question . , th the first , they had to make a design question , " do we take ontologies that have instances ? or just one that does not , that just has the types ? " and , so , since the d decision was on types , on a d simply type - based , we now have to hook it up to instances . this is grad c: , but the ontology is really not a smartkom thing , in and of itself . that 's more something that i kicked loose in eml . so it 's a completely eml thing . professor b: i understand , but is anybody doing anything about it ? it 's a political problem . we wo n't worry about it . grad c: no , but th the r i th i still think that there is enough information in there . , whether ok . so , th it will know about the twenty object types there are in the world . let 's assume there are only twenty object types in this world . and it will know if any of those have institutional meanings . so , in a sense , " i " used as institutions for some s in some sense or the other . which makes them enterable . right ? in a sense . professor b: anyway . so we may have to this is with the whole thing , we may have to build another data stru conceptually , we should be done . when we see what people have done , it may turn out that the easiest thing to do is to build a separate thing that just pools i like , i it may be , that , the instance w that we have to build our own instance , things , that , with their types , professor b: and then it goes off to the ontology once you have its type . so we build a little data structure and so what we would do in that case , is , in our instance gadget have our e v and if we d there is n't one we 'd get the type and then have the e v as for the type . so we 'd have our own little , eva tree . and then , for other , vectors that we need . so , we 'd have our own little things so that whenever we needed one , we 'd just use the ontology to get the type , and then would hash or whatever we do to say , " ! if it 's that type of thing , and we want its eva vector , pppt - pppt ! it 's that . " so , we can handle that . and then but , the combination functions , and whether we can put those in java bayes , and all that , is the bigger deal . that 's where we have to get technically clever . grad a: we could just steal the classes in javabayes and then interface to them with our own code . 
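the eva lookup rule proposed above , instance first , then up the type hierarchy to the first type carrying a vector , fits in a few lines . the town-hall vector ( point two / point three / point five ) is the one quoted earlier ; the hierarchy , the instance table and everything else is invented for illustration :

```python
# minimal sketch of EVA-vector (enter/view/approach) inheritance through a
# toy ontology; all data invented except the town-hall example values.
parent = {
    "monument_town_hall": "town_hall",
    "town_hall": "building",
    "building": "thing",
}
eva_by_type = {
    "town_hall": (0.2, 0.3, 0.5),
    "thing": (1 / 3, 1 / 3, 1 / 3),   # crude prior at the top level
}
instance_type = {"heidelberg_town_hall": "monument_town_hall"}
instance_eva = {}                      # per-instance overrides, none yet

def eva_vector(instance):
    if instance in instance_eva:       # specific knowledge wins
        return instance_eva[instance]
    t = instance_type[instance]
    while t not in eva_by_type:        # climb to the first type with a vector
        t = parent[t]
    return eva_by_type[t]

print(eva_vector("heidelberg_town_hall"))   # (0.2, 0.3, 0.5), inherited
```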
professor b: , it 's , e cute . , you ' ve been around enough to just ? professor b: , there 's this huge package which may or may not be consistent but , we could look at it . professor b: it 's b it 's an inter a it , it 's an interpreter and i it expects its data structures to be in a given form , and if you say , " hey , we 're gon na make a different data structure to stick in there " grad a: , no , but that just means there 's a protocol , right ? that you could professor b: it may or may not . i . that 's the question is " to what extent does it allow us to put in these g functions ? " and i . grad a: , no , but what i the so you could have four different bayes - nets that you 're running , and then run your own write your own function that would take the output of those four , and make your own " g function " , is what i was saying . professor b: , that 's fine if it 's if it comes only at the end . but suppose you want it embedded ? grad a: , then you 'd have to break all of your bayes - nets into smaller bayes - nets , with all the professor b: , , you bet . but , at that point you may say , " hey , java bayes is n't the only package in town . let 's see if there 's another package that 's , more civilized about this . " now , srini is worth talking to on this , cuz he said that he actually did hack some combining functions into but he does n't remember at least when i talked to him , he did n't remember whether it was an e an easy thing , a natural thing , or whether he had to do some violence to it to make it work . grad d: i do n't see why the , combining f functions have to be directly hacked into , they 're used to create tables so we can just make our own little functions that create tables in xml . professor b: , i say that 's one way to do it , is to just convert it int into a c p t that you zip it 's blown up , and is a it 's , it 's huge , but it does n't require any data fitting or complication . grad d: i do n't think , the fact that it blown u blows up is a huge issue in the sense that so say it blows up , right ? so there 's , like , the , ten , f ten , fifteen , things . it 's gon na be like , two to the that , which is n't so bad . professor b: i understand . i ' m just saying tha that w that was wi that was my note . the little note i sent said that . it said , " here 's the way you 'd take the logical f g function and turn it into a cpt . " that the max - the evidence - combining function . so we could do that . and maybe that 's what we 'll do . so , i will , e before next week , @ p push some more on this that dekai wu did , and try to understand it . , you 'll make a couple of more copies of the heckerman paper to give to people ? grad c: ok . and i 'll think s through this , getting eva vectors dynamically out of ontologies one more time because i s i ' m not quite whether we all think of the same thing or not , here . professor b: alright , great ! and , robert , for coming in under he he 's been sick , robert . grad a: i was thinking maybe we should just cough into the microphone and see if they ca n't th see if they can handle it .
the data collection running in parallel with the project can start shortly , once subjects are recruited. meanwhile , the german parser now works with english sentences. the parser's output modifies the xml used by the system to initiate actions and generate responses. the xml for map requests comprises a route , route elements and points of interest along the way ; it is at this level that enter/vista/approach tags will be added as action modes. as the project evolves , further enrichment of the ontology ( actions , linguistic features ) will be necessary. similarly , object representations will include an eva vector , which can be stored in the database entry for a particular building or inherited from the ontology of the building type. these elements will constitute only a small part of the inputs of the bayes-net that determines the action mode. the number of inputs can create a combinatorial explosion when setting the probabilities ; noisy-or's can help avoid this by simplifying the probability tables and applying a deterministic function to produce their complete version. in any case , beyond fulfilling the basic requirements ( translating the parser and the generator into english ) , the project is entirely open-ended in terms of research focus.

as the data collection is about to begin , there are some minor changes to be made to the design of the experiment , the script and the permission forms. subjects can be recruited either from within the university or through other social circles. as to the system design , the next step is the translation of the generator into english. moreover , it is important to test the system and its internal workings by adding new sentence types and modifying the parser. all further research will use the existing domain ( "tourists in heidelberg" ) , as this provides enough diversity for the purposes of the project. the german partners will implement all the necessary changes in the ontology ; it is therefore preferable for the group to exercise foresight and agree on the set of new tags they will need in the long run , so as to limit the number of change requests. finally , on a more technical note , noisy-or's were discussed and considered a sensible approach to the problems of setting the conditional probabilities of the bayes-nets.

although the parser has been modified to work with english , the details of its internal workings ( calling functions , setting discourse variables , generating actions ) are not yet clear. understanding the parsed data is helped by the database of objects , people and events accompanying the system , but the mapping of referring expressions to database objects can still be a hurdle. on a different level , the bayes-net used to generate the different action modes can easily become unmanageable as the number of features to be taken into account increases. this can be tackled with the noisy-or technique ; the deterministic functions it requires cannot be introduced directly into javabayes , although workarounds can be implemented. a final , high-level issue that has not yet been dealt with is the definition of the constructions and the construction-grammar framework behind the whole enterprise.

the preparation for the data collection is almost finished , and experiments are expected to start within a couple of weeks. some additional tv and cinema data is currently being translated from german.
the german parser has been translated and can now be used for a range of sentence types in english. in addition , the translation of some parts of the relational database accompanying the system has been commissioned. eml has provided the structure for map requests , the basic representation of the navigational goals upon which further action modes are going to be built. the same people are also creating a general , top-level xml object ontology that will include all types of buildings.
###dialogue: professor b: ok . ami , do yours then we 'll open it and it 'll be enough . grad a: mmm does n't , it should be the other way . , now it 's on . professor b: alright . anyway . so , before we get started with the , technical part , want to review what is happening with the our data collection . professor b: so , probably after today , that should n't come up in this meeting . th - this is s should be i m it is n't there 's another thing going on of gathering data , and that 's independent of this . but , want to make we 're all together on this . what we think is gon na happen is that , in parallel starting about now we 're gon na get fey to , where you 're working with me and robert , draft a note that we 're gon na send out to various cogsci c and other classes saying , " here 's an opportunity to be a subject . contact fey . " and then there 'll be a certain number of , hours during the week which she will be available and we 'll bring in people . , roughly how many , robert ? we d do we know ? professor b: ok . so , we 're looking for a total of fifty people , not necessarily by any means all students but we 'll s we 'll start with that . in parallel with that , we 're gon na need to actually do the script . and , so , i there 's a plan to have a meeting friday afternoon , with , jane , and maybe liz and whoever , on actually getting the script worked out . but what i 'd like to do , if it 's o k , is to s to , as i say , start the recruiting in parallel and possibly start running subjects next week . the week after that 's spring break , and maybe we 'll look for them some subjects next door or i grad c: also , f both fey and i will , do something of which i may , kindly ask you to do the same thing , which is we gon na check out our social infrastructures for possible subjects . meaning , kid children 's gymnastic classes , pre - school parents and . they also sometimes have flexible schedules . so , if you happen to be in a non - student social setting , and people who may be interested in being subjects we also considered using the berkeley high school and their teachers , maybe , and get them interested in . grad c: but i will just make a first draft of the , note , the " write - up " note , send it to you and fey and then grad c: and , are we have we concurred that , these forms are sufficient for us , and necessary ? professor b: , . there 's one tricky part about , they have the right i the last paragraph " if you agree to participate you have the opportunity to have anything excised which you would prefer not to have included in the data set . " ok ? now that , we had to be included for this other one which might have , meetings , about something . professor b: in this case , it does n't really make sense . , so what i 'd like to do is also have our subjects sign a waiver saying " i do n't want to see the final transcript " . and if they do n't if they say " no , i ' m not willing to sign that " , then we 'll show them the final transcript . but , . professor b: that , so we might actually , s i jane may say that , " , you ca n't do this " , " on the same form , we need a separate form . " but anyway . i 'd like to , e , add an a little thi a thing for them to initial , saying " nah , do i do n't want to see the final transcript . " but other than that , that 's one 's been approved , this really is the same project , rec and . so we just go with it . grad c: ok . so much for the data , except that with munich everything is fine now . they 're gon na transcribe . 
they 're also gon na translate the , german data from the tv and cinema for andreas . they 're they all seem to be happy now , with that . w c sh should we move on to the technical sides ? i the good news of last week was the parser . so , bhaskara and i started working on the parser . then bhaskara went to class and once he came back , it was finished . it , i did n't measure it , but it was about an hour and ten minutes . and , and now it 's we have a complete english parser that does everything the german parser does . grad e: what did you end up having to do ? , wha was there anything interesting about it ? grad c: , w we d the first we did is we tried to do change the " laufen " into " run " , or " running " , or " runs " . and we noticed that whatever we tried to do , it no effect . grad c: and , the reason was that the parser i c completely ignores the verb . so this sentence is parses the p the same output , grad c: that 's what you need . if if you 'd add today and evening , it 'll add time or not . grad c: so it i it does look at that . but all the rest is p simply frosting on the cake , and it 's optional for that parser . professor b: so , you can sho you you are are you gon na show us the little templates ? grad c: . we ar we can sh er show you the templates . i also have it running here , grad c: so if i do this now , you can see that it parsed the wonderful english sentence , " which films are on the cinema today evening ? " but , . grad c: it could be " this evening , which films are on the cinema " , or " running in the cinema , which " , " today evening " , i " is anything happening in the cinema this evening ? " professor b: actually , it 's a little tricky , in that there 's some allowable german orders which are n't allowable english orders and . and it is order - based . so it is n't it ? grad c: , these sentences are just silly . , d these were not the ones we actually did it . what 's an idiomatic of phrasing this ? which films are showing ? grad d: you want to get it ? or is di was it easy to get it ? grad c: ok . so . wonderful parse , same thing . except that we d w we do n't have this , time information here now , which is , . this are the reserve . anyways . so . these are the ten different sentence types that the parser was able to do . and it still is , now in english . and , you have already to make it a little bit more elaborate , right ? grad d: , i changed those sentences to make it , more , idiomatic . and , you can have i many variations in those sentences , they will still parse fine . so , in a sense it 's pretty broad . grad c: so , if you want to look at the templates , they 're conveniently located in a file , " template " . , and this is what i had to do . i had to change , @ " spielfilm " to " film " , " film " to " movie " , cinem " kino " to " cinema " to " today " heu " heute " to " today " , evening " abend " to " evening " grad d: one thing i was wondering , was , those functions there , are those things that modify the m - three - l ? ok . grad c: but we 'll get to that in a second . and so this means , this and " see " are not optional . want i like is all maybe in there , but may also not be in there . professor b: so so , if it says " this " and " see " , it also will work in " see " and " this " ? in the other order ? with those two key words ? grad c: action watch , whatever . nothing was specialfi specified . except that it has some references to audio - visual media here . 
grad c: where it gets that from it 's correct , but i where it gets it from . grad d: one thing i was wondering was , those percentage signs , right ? so , why do we even have them ? because if you did n't have them grad c: , i 'll tell you why . because it gives a you a score . and the value of the score is , v i assume , i , the more of these optional things that are actually in there , the higher the r score it is . grad c: so we should n't belittle it too much . it 's doing something , some things , and it 's very flexible . i ' ve just tried to be . grad c: ok . , let 's hope that the generation will not be more difficult , even though the generator is a little bit more complex . but we 'll mmm , that means we may need two hours and twenty minutes rather than an hour ten minutes , i hope . grad c: and the next thing i would like to be able to do , and it seems like this would not be too difficult either , is to say , " ok let 's now pretend we actually wanted to not only change the mapping of , words to the m - three - l but we also wanted to change add a new sentence type and make up some new m - three - l s " professor b: so that 'd be great . it would be a good exercise to just see whether one can get that to run . grad d: fine , . , so where are those functions " action " , " goodbye " , and so on , right ? are they actually , are they going to be called ? , are they present in the code for the parser ? grad c: . what it does , it i it does something fancy . it loads it has these style sheets and also the , schemata . so what it probably does , is it takes the , is this where it is ? this is already the xml ? this is where it takes its own , syntax , and converts it somehow . where is the grad c: , where it actually produces the xml out of the , parsed . no , this is not it . i ca n't find it now . you mean , where the act how the action " goodbye " maps into something grad c: this is what happens . this is what you would need to change to get the , xml changed . so when it encounts encounters " day " , it will , activate those h classes in the xml but , i saw those actions , the " goodbye " somewhere . , . grad d: mmm . m - three - l dot dtd ? that 's just a specification for the xml format . grad c: , we 'll find that out . so whatever n this does this is , looks l to me like a function call , right ? grad c: so , whenever it encounters " goodbye " , which we can make it do in a second , here grad d: each of those functions act on the current xml structure , and change it in some way , by adding a l a field to it , . professor b: y . they also seem to affect state , some of them there were other actions , that s seemed to step state variables somewhere , like the n s " discourse status confirm " . ok . so that 's going to be a call on the discourse and confirm that it 's grad c: and so whenever say , " write " , it will put this in here . professor b: , so it always just is it so it , go back , then , cuz it may be that all those th things , while they look like function calls , are just a way of adding exactly that to the xml . grad c: , we 'll see , when we say , let 's test something , goodbye , causes it to c to create an " action goodbye - end - action " . which is a means of telling the system to shut down . now , if we know that " write " produces a " feature discourse - status confirm discourse - status " . so if i now say " write , goodbye , " it should do that . it sho it creates this , grad d: right there . 
but there is some function call , because how does it know to put goodbye in content , but , confirm in features ? professor b: good point . it 's it 's the it 's under what sub - type you 're doing it . grad a: , it just automatically initializes things that are common , right ? so it 's just a shorthand . grad c: , this is german . so , now , this , it can not do anymore . nothing comes out of here . grad c: so , it does n't speak german anymore , but it does speak english . and there is , here , a reference so , this tells us that whatever is has the id " zero " is referenced here by @ the restriction seed and this is exa " i want " what was the sentence ? grad c: need two seats here . nuh . and where is it playing ? there should also be a reference to something , maybe . our d this is re here , we change and so , we here we add something to the discourse - status , that the user wants to change something that was done before and that , whatever is being changed has something to do with the cinema . grad a: so then , whatever takes this m - three - l is what actually changes the state , not the , ok . professor b: no , right , the discourse maintainer , i see . and it and it runs around looking for discourse status tags , and doing whatever it does with them . and other people ignore those tags . so , . i definitely think it 's it 's worth the exercise of trying to actually add something that is n't there . professor b: , a kid understanding what 's going on . then the next thing we talked about is actually , figuring out how to add our own tags , and like that . grad c: point number two . i got the , m - three - l for the routes today . , so i got some more . this is the , interesting . it 's just going up , it 's not going back down . so , this is , what i got today is the new m - three - l for , the maps , and with some examples so , this is the xml and this is what it will look like later on , even though it you ca n't see it on this resolution . and this is what it is the structure of map requests , also not very interesting , and here is the more interesting for us , is the routes , route elements , and , again , as we thought it 's really simple . this is the , , parameters . we have @ simple " from objects " and " to objects " and , points of interest along the way i asked them whether or not we could , first of all , i was little bit it seemed to me that this m way of doing it is a stack a step backwards from the way we ' ve done it before . t it seems to me that some notions were missing . professor b: so this is not a complicated negotiation . there 's there 's not seven committees , or anything , right ? professor b: great . so this is just trying to it 's a design thing , not a political thing . once we ' ve we can just agree on what oughta be done . good . grad c: exactly . however , the , e so that you understand , it is really simple . you you have a route , and you cut it up in different pieces . and every element of that e r f of that every segment we call a " route element " . and so , from a to b we cut up in three different steps , and every step has a " from object " where you start , a " to object " where y where you end , and some points of interest along the way . what w i was missing here , and , maybe it was just me being too stupid , is , i did n't get the notion of the global goal of the whole route . really , s was not straightforward visibly for me . and some other . 
and i suggested that they should be kind enough to do two things for us : one , also allocating some tags for our action schema enter - vista - approach , and also , since you had suggested that , figuring out if we ever , for a demo reason , wanted to shortcut directly to the gis and the planner , how we can do it . now , what 's the state of the art of getting to entrances , what 's the syntax for that , how to get to vista points and calculate those on the spot . and the approach mode , anyhow , is the default . that 's all they do these days . wherever you 'll find a route planner , it does nothing but get to the closest point where the street network is at minimal distance to the geometric center . professor b: so , now , this is important . i want to make an almost managerial point ; you 're in the midst of this , so you know better . but it seems to me it 's probably a good idea to minimize the number of change requests we make of them . so it seemed to me what we ought to do is get our story together and think about it some , internally , before asking them to make changes . does this make sense to you guys ? you 're doing the interaction , but it seemed to me that what we ought to do is come up with something where you , and whoever 's working most closely on it ( probably johno ) , take what they have , send it to everybody saying " this is what they have , this is what we think we should add " , and then have an iteration within our group and get our best idea of what we should add . and then go back to them . does this make sense to you ? or grad c: . especially if we want my feeling was we reserve something that has an ok label . that was my first step . no matter how we want to call it , this is our playground . and if we get something in there that is a structure elaborate and complex enough to maybe enable a whole simulation one of these days , that would be the perfect goal . professor b: that 's right . so the problem is n't the short - range optimization . it 's the one or two year thing . ok , what is the class of things we think we might try to do in a year or two ? how would we try to characterize those , and what do we want to request now that leaves enough space to do all that ? and that requires some thought . and so that sounds like a great thing to do as the priority item , as soon as we can do it . so you guys will send to the rest of us a version of this , and the description professor b: change the description to english , and then , with some suggestions about where do we go from here ? this was just the action end . at some point we 're going to have to worry about the language end , but for the moment just for this class of things we might want to try to encompass . and grad a: then the scope of this is beyond approach and vista ? professor b: yeah . this is everything that we might want to do in the next couple years . grad a: but i ' m just so this xml here just has to do with source - path - goal type , in terms of traveling through heidelberg . or travel , specifically . so , is the domain greater than that ? professor b: no . i think the idea is that it 's beyond source - path - goal , but we do n't need to get beyond tourists in heidelberg .
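the default approach mode described above ( " get to the closest point where the street network is at minimal distance to the geometric center " ) is easy to state as code ; a toy sketch with made - up coordinates :

    # toy sketch of the default "approach" computation: pick the street
    # point closest to the object's geometric center. data is made up.
    from math import dist

    def centroid(polygon):
        xs, ys = zip(*polygon)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def approach_point(street_points, object_polygon):
        c = centroid(object_polygon)
        return min(street_points, key=lambda p: dist(p, c))

    town_hall = [(0, 0), (4, 0), (4, 2), (0, 2)]   # building footprint
    streets = [(-1, 5), (2, 4), (6, 1), (2, -3)]   # sampled street nodes
    print(approach_point(streets, town_hall))      # -> (2, 4)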
it seems to me we can get all the complexity we want in actions and in language without going outside of tourists in heidelberg . but , i depending on what people are interested in , one could have , tours , one could have , explanations of why something is , why was this done , or , no there 's no end to the complexity you can build into the , what a tourist in heidelberg might ask . so , at least unless somebody else wants t to suggest otherwise the general domain we do n't have t to , broaden . that is , tourists in heidelberg . and if there 's something somebody comes up with that ca n't be done that way , then , . w we 'll look at that , but i 'd be s i 'd be surprised at if there 's any important issue that and , if you want to , push us into reference problems , that would be great . professor b: ok , so this is his specialty is reference , and , what are these things referring to ? not only anaphora , but , more generally the , this whole issue of , referring expressions , and , what is it that they 're actually dealing with in the world ? and , again , this is li in the databa this is also pretty formed because there is an ontology , and the database , and . so it is n't like , , the evening star or like that . i i it all the entities do have concrete reference . although th the to get at them from a language may not be trivial . there are n't really deep mysteries about , what w what things the system knows about . professor b: you have proper names , and descriptions . and a l and a lot and anaphora , and pronouns , grad c: now , we hav the whole unfortunately , the whole database is , in german . we have just commissioned someone to translate some bits of it , ie the e the shortest k the more general descriptions of all the objects and , persons and events . so , it 's a relational database with persons , events , and , objects . and it 's quite , there . but did y i there will be great because the reference problem really is not trivial , even if you have such a g - defined world . grad a: could you give me an example of a reference problem ? so l make it more concrete ? grad c: how do i get to the powder - tower ? we t think that our bit in this problem is interesting , but , just to get from powder - tower to an object i id in a database is also not really trivial . phd f: or or if you take something even more scary , " how do i get to the third building after the tower ? the ple - powder - tower ? " , you need some mechanism for professor b: or you can say " how " , " how do i get back ? " and , again , it 's just a question of which of these things , people want to dive into . what , i ' m gon na try to do , and i , pwww ! let 's say that by the end of spring break , i 'll try to come up with some general story about , construction grammar , and what constructions we 'd use and how all this might fit together . there 's this whole framework problem that i ' m feeling really uncomfortable about . and i have n't had a chance to think about it . but i want to do that early , rather than late . and you and i will probably have to talk about this some . grad c: u that 's what strikes me , that we the de g , small something , maybe we should address one of these days , is to that most of the work people actually always do is look at some statements , and analyze those . whether it 's abstracts or newspapers and like this . but the whole i is it really relevant that we are dealing mostly with , questions ? 
grad c: and this is it seems to me that we should maybe at least spend a session or brainstorm a little bit about whether that l this is special case in that sense . i . did we ever find m metaphorical use in questions in that sense , really ? professor b: , we could take all the standard metaphor examples and make question versions of them . professor b: , or , . " wh - why is he pushing for promotion ? " or , " who 's pushing proof " er , just pick any of them and just do the so i do n't think , it 's difficult , to convert them to question forms that really exist and people say all the time , and we how to handle them , too . right ? , it 's i d it we how to handle the declarative forms , @ really , and , then , the interrogative forms , - . grad e: . it 's just that the goals are g very different to cases so we had this problem last year when we first thought about this domain , actually , was that most of the things we talked about are our story understanding . , we 're gon na have a short discourse and the person talking is trying to , i , give you a statement and tell you something . and here , it 's th grad e: and then here , y you are j , the person is getting information and they or may not be following some larger plan , that we have to recognize or , infer . and th the their discourse patterns probably { nonvocalsound } do n't follo follow quite as many logical connec professor b: right . no , that 's one of things that 's interesting , is in this over - arching story we worked it out for th as you say , this the storytelling scenario . and it 's really worth thinking through what it looks like . what is the simspec mean , et cetera . grad e: m right . cuz for a while we were thinking , " , how can we change the , data to illicit tha illicit , actions that are more like what we are used to ? " but we would rather , try to figure out what 's , professor b: , i . , maybe that 's what we 'll do is s u e we can do anything we want with it . , once we have fulfilled these requirements , professor b: ok , and the one for next , summer is just half done and then the other half is this , " generation thing " which we think is n't much different . so once that 's done , then all the rest of it is , , what we want to do for the research . and we can w we can do all sorts of things that do n't fit into their framework . th - there 's no reason why we 're c we 're constrained to do that . if we can use all the , execution engines , then we can , really { nonvocalsound } try things that would be too much pain to do ourselves . but there 's no obligation on any of this . so , if we want to turn it into u understan standing stories about heidelberg , we can do that . , that would just be a t a grad c: or , we need and if we ' r take a ten year perspective , we need to do that , because w e w a assuming we have this , we ta in that case we actually do have these wonderful stories , and historical anecdotes , and knights jumping out of windows , grad c: and - and tons of . so , th the database is huge , and if we want to answer a question on that , we actually have to go one step before that , and understand that . in order to e do sensible information extraction . grad c: and so , this has been a deep map research issue that was is part of the unresolved , and to - do 's , and something for the future , is how can we run our text , our content , through a machine that will enable us , later , to retrieve or answer e questions more sensibly ? 
phd f: so , i was just going to ask , what is the basic thing that you are obligated to do by the summer before we can move professor b: so , what happened is , robert was describing the there 's two packages . there 's a , quote , parser , a particular piece of this big system , which , in german , takes these sentence templates and produces xml structures . and one of our jobs was to make the english equivalent of that . that , these guys did in a day . the other thing is , at the other end , roughly at the same level , there 's something that takes xml structures and produces an output xml structure which is instructions for the generator . and then there 's a language generator , and then after that a synthesizer ; it goes from an xml structure to language generation , to actual specifications for a synthesizer . but again , there 's one module in which there 's one piece that we have to convert to english . professor b: and as i say , this all along was viewed as a minor thing , necessary , but not and much more interesting is the fact that , as part of doing this , we are inheriting this system that does all these other things . professor b: not precisely what we want , and that 's where it gets difficult . and i do n't pretend to understand yet what we really ought to do . grad c: so , enough of that , but johno and i will take up that responsibility and get a first draft of that . now , we have just two more short things . you guys started fighting on the bayes - net " noisy - or " front ? grad d: i should talk a little bit about that , because that might be a good architecture to have , in general , for problems with multiple inputs to a node . professor b: good ! ok . and what 's the other one ? so that 's just what the agenda is ? professor b: i ' ve got a couple new wu papers as well . so i ' ve been in contact with wu , so probably let 's put that off till i understand better what he 's doing . it 's just a little embarrassing all this was in his thesis and i was on his thesis committee , so i really knew this at one time . professor b: part of what i have n't figured out yet is how all this goes together . so i 'll dig up some more from dekai . and so why do n't we just do the grad d: so , recall that we want to have this structure in our bayes - nets . namely , you have these nodes that have several bands , right ? the typical example is that these are all a bunch of cues for something , and this is a certain effect that we 'd like to conclude . so let 's just look at the case when this is actually the final action , right ? so this is like , touch , grad d: eva , right ? enter , view , approach , right ? grad d: and say it could be this is n't the way it really is , but suppose someone mentioned admission fees , or it takes too long . let me just say " landmark " : whether it 's a landmark . then there 's another thing that says if it 's closed or not at the moment . alright , so you have n nodes , right ? and the problem that we were having was that , given n nodes , there 's two to the n configurations , and furthermore , since there 's three things here , we need to specify three times two to the n probabilities . that 's assuming these are all binary , which they may not be .
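to make the blow - up concrete , here is the arithmetic for the situation just described ( three output values , n binary cues ; n = 10 is only an example ) :

    full table : 3 x 2^n entries ; for n = 10 , 3 x 1024 = 3072 numbers .
    one small table per cue : n x 3 entries ; for n = 10 , just 30 numbers .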
they could be " time of day " , in which case we could say " morning , afternoon , evening , night " . so this could be more . it 's a lot , anyway , and that 's a lot of probabilities to put here , which is a pain . so noisy - ors are a way to deal with this . where should i put this ? so the idea is , let 's call these c - one , c - two , c - three , and c - four , and e for the effect . the idea is to have these intermediate nodes . actually , the idea , first of all , is that each of these things has a quote - unquote distinguished state , which means that this is the state in which we do n't really know anything about it , right ? so if we do n't really know if it 's a landmark or not , or if that just does n't seem relevant , then that would be the distinguished state . if there is something for the person talking about the admission fee , and they did n't talk about it , that would be the distinguished state . grad d: that 's just the word they used in that paper . so the idea is that you have these intermediate nodes , right ? e - one , e - two , e - three and e - four ? grad d: so the idea is that each of these e - i s represents what this would be if all the other ones were in the distinguished state , right ? so suppose the thing that they talked about is a landmark , but none of the other cues really apply . then this would just represent the probability distribution of this , assuming that this cue is turned on and the other ones just did n't apply . so if it is a landmark , and none of the other things are really applicable , then this would represent the probability distribution . so maybe in this case we decide that if the thing 's a landmark and we do n't know anything else , then we 're gon na conclude that they want to view it with probability point four , they want to enter it with probability point five , and they want to approach it with probability point one , say . so we come up with these little tables for each of those , and the final thing is that this is a deterministic function of these , so we do n't need to specify any probabilities . we just have to say what function this is , right ? so we can let this be g of e - one comma e - two , e - three , e - four , right ? and our example g would be a majority vote ? professor b: ok , so the important point is not what the g function is . the important point is that there is a general idea of shortcutting the full cpt the full conditional probability table with some function , which you choose appropriately for each case . so depending on what your situation is , there are different functions which are most appropriate . so i gave bhaskara a copy of this " ninety - two " paper . and you got one , robert . i do n't know who else has seen it . professor b: it 's short . so , have you read it yet ? professor b: ok , so you should take a look . nancy , i ' m sure you read it at some point in life . professor b: anyway , the paper is n't real hard . one of the questions to come at bhaskara with is , " how much of this does javabayes support ? " grad d: it 's a good question . so what we want is for javabayes to support deterministic functions . and in a sense we can make it supported by manually entering probabilities that are ones and zeros , right ? professor b: right .
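a toy version of the intermediate - node scheme just described , with the point four / point five / point one landmark table from the discussion ; the other numbers and the choice of g ( here , averaging the active cues ) are invented :

    # sketch of the noisy-or-style decomposition: each cue that is NOT in
    # its "distinguished" (unknown / irrelevant) state contributes its own
    # small table over the EVA actions, and a deterministic g combines them.
    EVA = ("view", "enter", "approach")

    CUE_TABLES = {
        "landmark":      {"view": 0.4, "enter": 0.5, "approach": 0.1},
        "closed":        {"view": 0.7, "enter": 0.0, "approach": 0.3},
        "admission_fee": {"view": 0.2, "enter": 0.6, "approach": 0.2},
    }

    def g(active_cues):
        """combine per-cue tables; cues in the distinguished state are absent."""
        if not active_cues:              # nothing observed: fall back to a prior
            return {a: 1 / len(EVA) for a in EVA}
        return {a: sum(CUE_TABLES[c][a] for c in active_cues) / len(active_cues)
                for a in EVA}

    print(g({"landmark", "closed"}))
    # -> view 0.55, enter 0.25, approach 0.20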
so the little handout the little thing that i sent a message about said , here is a way to take one thing you could do , which is in a way stupid , is take this deterministic function and use it to build the cpt . so if javabayes wo n't do it for you , you can convert all that into what the cpt would be . and what i sent out about a week ago was an idea of how to do that for evidence combination . so one function that you could use as your " g function " is evidence - combining . so if each of the ones has its own little table like that , then you could take the strength of each of those times its little table , and you 'd add up the total evidence for " v " , " e " , and " a " . grad d: i do n't think you can do this , because g is a function from that to that . so there 's no numbers . there 's just n - tuples of e s . professor b: no , but what i ' m saying is , if you decide what 's appropriate is probabilistic evidence combination , you can write a function that does it . it 's actually one of the examples he 's got in there . but anyway , skipping the question of exactly which functions is it clear now that you might like to be able to shortcut the whole conditional probability table ? grad c: it seems very plausible in some sense , where we will be likely to not observe some of the cues , cuz we do n't have access to the information . professor b: that 's one of the problems : where would it all come from ? grad c: if it 's a discourse - initial phrase , we will have nothing in the discourse history . so , if we ever want to wonder what was mentioned grad d: are you saying that we 'll not be able to observe certain nodes ? that 's fine . that is an orthogonal thing . professor b: so there 's two separate things , robert . the bayes - nets in general are quite good at saying , " if you have no current information about this variable , just take the prior for that . " ok ? that 's what they 're real good at . so if you do n't have any information about the discourse , you just use your priors of whatever the discourse , probabilistically , would be . and it 's not a great estimate , but it 's the best one you have . so that they 're good at . but the other problem is , how do you fill in all these numbers ? and that 's the one he was getting at . grad d: so , specifically , in this case you have to have this many numbers , whereas in this case you just have to have three for this , three for this , right ? so this is much smaller than that . grad e: so you do n't need data enough to cover nearly as much . grad a: so , really , a noisy - or seems to " neural - net - icize " these bayes - nets ? professor b: to some " noisy - or " is a funny way of referring to this , because the noisy - or is only one instance . professor b: that one actually is n't a noisy - or . so we 'll have to think of a way t grad a: my point was more that , just as with a neural net , right , things come in and you have a function that combines them professor b: that 's true . it is also more neural - net - like , although it is n't necessarily a sum of weights or anything like that .
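the " convert it into a cpt " route mentioned here is mechanical ; a sketch ( the placeholder g is a majority vote , so the resulting table really is all ones and zeros ) :

    # compile a deterministic combination function g into a plain CPT by
    # enumerating every parent configuration: 2**n rows for n binary cues.
    from itertools import product

    CUES = ("landmark", "closed", "admission_fee")

    def g(active):
        """placeholder: majority of cues active -> enter, otherwise view."""
        if len(active) * 2 > len(CUES):
            return {"view": 0.0, "enter": 1.0, "approach": 0.0}
        return {"view": 1.0, "enter": 0.0, "approach": 0.0}

    def g_to_cpt(g):
        cpt = {}
        for bits in product((False, True), repeat=len(CUES)):
            active = {c for c, on in zip(CUES, bits) if on}
            cpt[bits] = g(active)        # one distribution per configuration
        return cpt

    print(len(g_to_cpt(g)))              # 8 rows for three binary cues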
professor b: you could have , like , the noisy - or function , really , is one that essentially says , take the max . professor b: but anyway , that 's the standard way people get around the there are a couple other ones . there are ways of breaking this up into subnets and things like that . but we definitely it 's a great idea to pursue that . grad c: that still leaves one question . you can always see easily that i ' m not grasping everything correctly , but what seemed attractive to me in the last discussion we had was that we find a means of getting these point four , point five , point one of c - four not because a is a landmark or not , but we label this with whatever object type , and if it 's a garden , it 's point three , point four , point two . if it 's a castle , it 's point eight , point one . if it 's a town hall , it 's point two , point three , point five . and we do n't want to write this down necessarily every time for something but , let 's see . grad d: where else would it be stored ? that 's the question . grad c: in the beginning , we 'll write up a flat file . we know we have twenty object types and we 'll write it down in a flat file . professor b: so let me say something , guys , cuz there 's a pretty point about this we might as well get in right now , which is : the hierarchy that comes with the ontology is just what you want for this . so that , if you know about it let 's say a particular town hall , that it 's one that is a monument , then that would be stored there . if you do n't know , you look up the hierarchy , and then you 'd have this little vector of approach mode or eva mode . ok , so we have the eva vector for various kinds of landmarks . if you know it for a specific landmark , you put it there . if you do n't , you just go up the hierarchy to the first place you find one . professor b: or link to it but in any case i view it logically as being in the ontology . it 's part of what you know about an object , its eva vector . and as i say , if you know about a specific object , you put it there . this is part of what dekai was doing . so when we get to wu , we 'll see what he says about that . and then if it is n't there , you go higher , and if you do n't know anything except that it 's a building , then up at the highest thing you have what amounts to a prior . if you do n't know anything else about a building , you just take whatever your crude approximation is up at that level , which might be equal , or whatever it is . so that 's a very pretty relationship between these local vectors and the ontology . and it seems to me the obvious thing to do , unless we find a reason to do something different . does this make sense to you , bhaskara ? grad d: but we 're not doing the ontology , so we have to get to whoever is doing it , ultimately , professor b: so that 's another thing we 're gon na need to do . professor b: we 're gon na need some way to either get a tag in the ontology , or add fields , or some way to associate or it may be that all we can do is some of our own hash tables there 's always a way to do that . it 's just a question of grad c: but it strikes me as if we get the mechanism , that will be the wonderful part .
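the lookup rule being proposed ( instance first , then up the type hierarchy to the first stored vector , ending at a prior ) fits in a few lines ; all names and numbers below are invented :

    # sketch: fetch an EVA vector from the most specific place it is stored.
    PARENT = {"monument_town_hall": "town_hall",
              "town_hall": "building",
              "building": "thing"}
    EVA_VECTORS = {"town_hall": (0.2, 0.3, 0.5),   # (enter, view, approach)
                   "thing": (1/3, 1/3, 1/3)}       # top-level prior
    INSTANCE_TYPE = {"heidelberg_town_hall": "monument_town_hall"}

    def eva_vector(name):
        if name in EVA_VECTORS:                    # instance-specific vector
            return EVA_VECTORS[name]
        node = INSTANCE_TYPE.get(name, name)       # otherwise start at its type
        while node not in EVA_VECTORS:
            node = PARENT[node]                    # climb until something is stored
        return EVA_VECTORS[node]

    print(eva_vector("heidelberg_town_hall"))      # -> (0.2, 0.3, 0.5)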
and then , how to make it work is the second part . in the sense that the guy who was doing the ontology apologized that it will take him another two to three days , because they 're having real trouble getting the upper level straight right now . the reason is , given the projects that all carry their own taxonomy and their own history , they 're really trying to build one top - level ontology that covers all the eml projects , and that 's a tough cookie , a little bit tougher than they figured . i could have told them so . but nevertheless , it 's going to be there by next monday , and i will show you some examples from that , for towers and so on . what i do n't think is ever going to be in the ontology is the likelihood of people entering town halls , and looking at town halls , and approaching town halls , especially since we are dealing with a type - based , not an instance - based ontology . so there will be nothing on that town hall , or on the berkeley town hall , or on the heidelberg town hall . it 'll just be information on town halls . grad c: that 's a different question . first , they had to make a design decision : " do we take ontologies that have instances , or just one that does not , that just has the types ? " and since the decision was on types , on simply type - based , we now have to hook it up to instances . this is grad c: but the ontology is really not a smartkom thing in and of itself . that 's more something that i kicked loose in eml . so it 's a completely eml thing . professor b: i understand , but is anybody doing anything about it ? it 's a political problem . we wo n't worry about it . grad c: no , but i still think that there is enough information in there . so it will know about the twenty object types there are in the world . let 's assume there are only twenty object types in this world . and it will know if any of those have institutional meanings , in a sense used as institutions in some sense or the other , which makes them enterable , right ? in a sense . professor b: anyway . so we may have to build another data structure conceptually , we should be done . when we see what people have done , it may turn out that the easiest thing to do is to build a separate thing that just pools it may be that we have to build our own instance things , with their types , professor b: and then it goes off to the ontology once you have its type . so we build a little data structure . and what we would do in that case is , in our instance gadget , have our eva vector , and if there is n't one we 'd get the type and then have the eva vector for the type . so we 'd have our own little eva tree , and then the same for other vectors that we need . so we 'd have our own little things so that whenever we needed one , we 'd just use the ontology to get the type , and then hash or whatever we do to say , " if it 's that type of thing , and we want its eva vector , it 's that . " so we can handle that . but the combination functions , and whether we can put those in javabayes , and all that , is the bigger deal . that 's where we have to get technically clever . grad a: we could just steal the classes in javabayes and then interface to them with our own code .
professor b: it 's cute . you ' ve been around enough to just ? professor b: there 's this huge package which may or may not be consistent , but we could look at it . professor b: it 's an interpreter , and it expects its data structures to be in a given form , and if you say , " hey , we 're gon na make a different data structure to stick in there " grad a: no , but that just means there 's a protocol , right ? that you could professor b: it may or may not . i do n't know . the question is " to what extent does it allow us to put in these g functions ? " and i do n't know . grad a: no , but what i meant is you could have four different bayes - nets that you 're running , and then write your own function that would take the output of those four and make your own " g function " , is what i was saying . professor b: that 's fine if it comes only at the end . but suppose you want it embedded ? grad a: then you 'd have to break all of your bayes - nets into smaller bayes - nets , with all the professor b: you bet . but at that point you may say , " hey , javabayes is n't the only package in town . let 's see if there 's another package that 's more civilized about this . " now , srini is worth talking to on this , cuz he said that he actually did hack some combining functions in , but at least when i talked to him , he did n't remember whether it was an easy thing , a natural thing , or whether he had to do some violence to it to make it work . grad d: i do n't see why the combining functions have to be directly hacked in . they 're used to create tables , so we can just make our own little functions that create tables in xml . professor b: i 'd say that 's one way to do it , to just convert it into a cpt . it 's blown up , and it 's huge , but it does n't require any data fitting or complication . grad d: i do n't think the fact that it blows up is a huge issue , in the sense that , say it blows up , right ? so there 's , like , ten , fifteen things . it 's gon na be like two to the power of that , which is n't so bad . professor b: i understand . i ' m just saying that was my note . the little note i sent said that . it said , " here 's the way you 'd take the logical g function and turn it into a cpt " , with the evidence - combining function . so we could do that , and maybe that 's what we 'll do . so , before next week i will push some more on this work that dekai wu did , and try to understand it . you 'll make a couple more copies of the heckerman paper to give to people ? grad c: ok . and i 'll think through this getting eva vectors dynamically out of ontologies one more time , because i ' m not quite sure whether we all think of the same thing or not here . professor b: alright , great ! and thanks , robert , for coming in ; he 's been sick . grad a: i was thinking maybe we should just cough into the microphone and see if they can handle it . ###summary: The data collection running in parallel with the project can start shortly with recruiting subjects. Meanwhile, the German parser now works with English sentences. The parser's output modifies the XML used by the system to initiate actions and generate responses. The XML for map requests also comprises a route, route elements and points of interest along the way.
It is at this level that enter/vista/approach tags will be added as action modes. As the project evolves, further enrichment of the ontology (actions, linguistic features) will be necessary. Similarly, object representations will include an EVA vector. This can be incorporated in the database entry for a particular building or inherited from the ontology of the building type. These elements will constitute only a small part of the inputs of the Bayes-net that determines the action mode. The actual number of the inputs can create a combinatorial explosion when setting the probabilities. Noisy-ORs can help avoid this by simplifying the probability tables and applying a deterministic function to produce their complete version.

In any case, beyond fulfilling the basic requirements (translating the parser and the generator into English), the project is entirely open-ended in terms of focus of research. As the data collection is about to begin, there are some minor changes to be made in the design of the experiment, the script and the permission forms. Subjects can be recruited either from within the university or through other social circles. As to the system design, the next step is the translation of the generator into English. Moreover, it is important to test the system and its internal workings by adding new sentence types and modifying the parser. All further research will use the existing domain ("tourists in Heidelberg"), as this provides enough diversity for the purposes of the project. The German partners for the project will realise all the necessary changes in the ontology. It is therefore preferable for the group to exercise foresight and agree on the set of new tags they will need in the long run, so that they limit the number of change requests. Finally, on a more technical note, noisy-ORs were discussed and considered a sensible approach to deal with the potential problems with setting the conditional probabilities of the Bayes-nets.

Although the parser has been modified to work with English, the details of its internal workings (calling functions, setting discourse variables, generating actions) are not yet clear. Understanding the parsed data is helped by the database of objects, people and events accompanying the system, but the mapping of referring expressions to database objects can still be a hurdle. On a different level, the Bayes-net used to generate the different action modes can easily become unmanageable as the number of features to be taken into account increases. This can be tackled with the use of the noisy-OR technique. The deterministic functions this requires cannot be introduced directly into JavaBayes, although some workarounds can be implemented. A final, high-level issue that has not been dealt with yet is the definition of the constructions and the construction grammar framework analysis behind the whole enterprise.

The preparation for the data collection is almost finished and experiments are expected to start within a couple of weeks. There is some additional TV and cinema data currently being translated from German. The German parser has been translated and it can now be used for a range of sentence types in English. On the other hand, the translation of some parts of the relational database accompanying the system has also been commissioned. EML have provided the structure for map requests, the basic representation of the navigational goals upon which further action modes are going to be built. The same people are also creating a general, top-level XML object ontology that will include all types of buildings.
professor b: so , we have n't sent around the agenda . so , i , any agenda items anybody has , wants to talk about , what 's going on ? phd a: , i had a just a quick question but i know there was discussion of it at a previous meeting that i missed , but just about the wish list item of getting good quality close - talking mikes on every speaker . professor b: ok , so let 's so let 's just do agenda building right now . ok , so let 's talk about that a bit . professor b: , @ tuss close talking mikes , better quality . ok , we can talk about that . you were gon na starting to say something ? postdoc g: , you , already know about the meeting that 's coming up and i if this is appropriate for this . i . , maybe it 's something we should handle outside of the meeting . professor b: we can so we can ta so n nist is nist folks are coming by next week professor b: and , george doddington will be around as . , ok , so we can talk about that . , hear about how things are going with , the transcriptions . that 's right . professor b: that would sorta be an obvious thing to discuss . , an - anything else , strike anybody ? phd a: , we started running recognition on one conversation but it 's the r is n't working yet . so , but if anyone has phd a: , the main thing would be if anyone has , knowledge about ways to , post - process the wave forms that would give us better recognition , that would be helpful to know about . professor b: alright , that seems like a good collection of things . and we 'll undoubtedly think of other things . postdoc g: i had thought under my topic that i would mention the , four items that i , put out for being on the agenda f on that meeting , which includes like the pre - segmentation and the developments in multitrans . professor b: alright , why do n't we start off with this , u i the order we brought them up seems fine . so , better quality close talking mikes . so the one issue was that the , lapel mike , is n't as good as you would like . and so , it 'd be better if we had close talking mikes for everybody . right ? phd a: , the and actually in addition to that , that the close talking mikes are worn in such a way as to best capture the signal . and the reason here is just that for the people doing work not on microphones but on like dialogue and , or and even on prosody , which don is gon na be working on soon , it adds this extra , vari variable for each speaker to deal with when the microphones are n't similar . phd a: so and i also talked to mari this morning and she also had a strong preference for doing that . and she said that 's useful for them to know in starting to collect their data too . professor b: , one thing i was gon na say was that , i we could get more , of the head mounted microphones even beyond the number of radio channels we have because whether it 's radio or wire is probably second - order . and the main thing is having the microphone close to you , grad h: so it 's towards the corner of your mouth so that breath sounds do n't get on it . and then just about , a thumb or a thumb and a half away from your mouth . phd a: but if we could actually standardize , the microphones , as much as possible that would be really helpful . professor b: , it does n't hurt to have a few extra microphones around , so why do n't we just go out and get an order of if this microphone seems ok to people , i 'd just get a half dozen of these things . grad h: the onl the only problem with that is right now , some of the jimlets are n't working . 
the little the boxes under the table . and so , w , i ' ve only been able to find three jacks that are working . phd a: but y we could just record these signals separately and time align them with the start of the meeting . professor b: right now , we ' ve got , two microphones in the room , that are not quote - unquote standard . so why do n't we replace those professor b: , however many we can plug in . , if we can plug in three , let 's plug in three . professor b: also what we ' ve talked before about getting another , radio , and so then that would be , three more . professor b: so , so we should go out to our full complement of whatever we can do , but have them all be the same mike . the original reason that it was done the other way was because , it w it was an experimental thing and i do n't think anybody knew whether people would rather have more variety or , more uniformity , but @ but , sounds fine . phd a: , for short term research it 's just there 's just so much effort that would have to be done up front n , so , uniformity would be great . phd e: is it because you you 're saying the for dialogue purposes , so that means that the transcribers are having trouble with those mikes ? is that what you mean ? postdoc g: a couple times , so , , the transcribers notice and there 're some where , ugh , there 's it 's the double thing . it 's the equipment and also how it 's worn . and he 's always they just rave about how wonderful adam 's channel is . postdoc g: , but it 's not just that , it 's also you it 's also like n no breathing , no , it 's like it 's , it 's really { nonvocalsound } it makes a big difference from the transcribers ' point of view grad h: , that the point of doing the close talking mike is to get a good quality signal . we 're not doing research on close talking mikes . so we might as get it as uniform as we can . professor b: now , this is locking the barn door after the horse was stolen . we do have thirty hours , of speech , which is done this way . professor b: but but , , for future ones we can get it a bit more uniform . postdoc g: and there was some talk about , maybe the h headphones that are uncomfortable for people , to grad h: so , as i said , we 'll do a field trip and see if we can get all of the same mike that 's more comfortable than these things , which are horrible . professor b: ok , second item was the , nist visit , and what 's going on there . postdoc g: ok , so , , jonathan fiscus is coming on the second of february and i ' ve spoken with , u a lot of people here , not everyone . , and , he expressed an interest in seeing the room and in , seeing a demonstration of the modified multitrans , which i 'll mention in a second , and also , he was interested in the pre - segmentation and then he 's also interested in the transcription conventions . and , so , it seems to me in terms of like , i it wou , ok . so the room , it 's things like the audio and c and audi audio and acoustic properties of the room and how it how the recordings are done , and that thing . and , . ok , in terms of the multi - trans , that 's being modified by dave gelbart to , handle multi - channel recording . grad h: , i should ' ve i was just thinking i should have invited him to this meeting . i forgot to do it . postdoc g: that 's ok , we 'll , and it 's t and it looks really great . he he has a prototype . i , @ did n't see it , yesterday but i ' m going to see it today . 
and , that 's that will enable us to do , tight time marking of the beginning and ending of overlapping segments . at present it 's not possible with limitations of the , original design of the software . and . so , i . in terms of , like , pre - segmentation , that continues to be , a terrific asset to the transcribers . do you i know that you 're al also supplementing it further . do you want to mention something about that c thilo , or ? phd c: , . what what i ' m doing right now is i ' m trying to include some information about which channel , there 's some speech in . but that 's not working at the moment . i ' m just trying to do this by comparing energies , normalizing energies and comparing energies of the different channels . and so to give the transcribers some information in which channel there 's speech in addition to the thing we did now which is just , speech - nonspeech detection on the mixed file . so i ' m relying on the segmentation of the mixed file phd c: but i ' m trying to subdivide the speech portions into different portions if there is some activity in different channels . professor b: , something i did n't put in the list but , on that , same day later on in or maybe it 's no , actually it 's this week , dave gelbart and i will be , visiting with john canny who i , is a cs professor , who 's interested in ar in array microphones . professor b: and so we wanna see what commonality there is here . , maybe they 'd wanna stick an array mike here when we 're doing things professor b: but they might wanna just , , you could imagine them taking the four signals from these table mikes and trying to do something with them , i also had a discussion so , w , we 'll be over there talking with him , after class on friday . , we 'll let what goes with that . also had a completely unrelated thing . i had a , discussion today with , birger kollmeier who 's a , a german , scientist who 's got a fair sized group doing a range of things . it 's auditory related , largely for hearing aids and so on . but , he does with auditory models and he 's very interested in directionality , and location , and , head models and microphone things . and so , he 's he and possibly a student , there w there 's , a student of his who gave a talk here last year , may come here , in the fall for , a five month , sabbatical . so he might be around . get him to give some talks and so on . but anyway , he might be interested in this . phd e: that that reminds me , i had a thought of an interesting project that somebody could try to do with the data from here , either using , the mikes on the table or using signal energies from the head worn mikes , and that is to try to construct a map of where people were sitting , based on professor b: e given that , the block of wood with the two mikes on either side , if i ' m speaking , or if you 're speaking , or someone over there is speaking , it if you look at cross - correlation functions , you end up with a if someone who was on the axis between the two is talking , then you get a big peak there . and if someone 's talking on , one side or the other , it goes the other way . and then , it even looks different if th t if the two people on either side are talking than if one in the middle . it it actually looks somewhat different , so . phd e: i was just thinking , as i was sitting here next to thilo that , when he 's talking , my mike probably picks it up better than your guys 's mikes . 
so if you just looked at phd e: , looked at the energy on my mike and you could get an idea about who 's closest to who . professor b: , you have to the appropriate normalizations are tricky , and are probably the key . phd a: you just search for adam 's voice on each individual microphone , you know where everybody 's sitting . professor b: . we ' ve switched positions recently so you ca n't anyway . so those are just a little couple of news items . postdoc g: can i ask one thing ? , so , jonathan fiscus expressed an interest in , microphone arrays . , is there b and i also want to say , his he ca n't stay all day . he needs to , leave for , from here to make a two forty - five flight postdoc g: from oakland . so it makes the scheduling a little bit tight but do you think that , i john canny should be involved in this somehow or not . i have no idea . professor b: probably not but i 'll know better after i see him this friday what level he wants to get involved . professor b: , he might be excited to and it might be very appropriate for him to , or he might have no interest whatsoever . really . grad h: is he involved in ach ! i ' m blanking on the name of the project . nist has done a big meeting room instrumented meeting room with video and microphone arrays , and very elaborate software . is is he the one working on that ? professor b: that 's what they 're starting up . no , that 's what all this is about . they they have n't done it yet . they wanted to do it professor b: , they ' ve instrumented a room but i do n't think they have n't started recordings yet . they do n't have the t the transcription standards . they do n't have the grad h: , cuz what i had read was , they had a very large amount of software infrastructure for coordinating all this , both in terms of recording and also live room where you 're interacting the participants are interacting with the computer , and with the video , and lots of other . professor b: , i ' m not . all all i know is that they ' ve been talking to me about a project that they 're going to start up recording people meet in meetings . and , it is related to ours . they were interested in ours . they wanted to get some uniformity with us , about the transcriptions and so on . professor b: and one notable difference u actually i ca n't remember whether they were going to routinely collect video or not , but one , difference from the audio side was that they are interested in using array mikes . so , , i 'll just tell you the party line on that . the reason i did n't go for that here was because , the focus , both of my interest and of adam 's interest was , in impromptu situations . and we 're not recording a bunch of impromptu situations but that 's because it 's different to get data for research than to actually apply it . and so , for scientific reasons we thought it was good to instrument this room as we wanted it . but the thing we ultimately wanted to aim at was a situation where you were talking with , one or more other people i , in an p impromptu way , where you did n't actually the situation was going to be . and therefore it would not it 'd be highly unlikely that room would be outfitted with some very carefully designed array of microphones . , so it was only for that reason . it was just , yet another piece of research and it seemed like we had enough troubles just professor b: no . so there 's , there 's a whole range of things there 's a whole array of things , that people do on this . 
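the two ideas floated above , picking the loudest head - worn channel and reading direction off a cross - correlation peak , can be sketched with numpy ; real signals would need framing and per - channel gain calibration , and the normalization really is the tricky part :

    import numpy as np

    def loudest_channel(channels, gains):
        """channels: 1-D arrays, one per head-worn mike; gains: per-channel
        calibration factors (the tricky part, per the discussion)."""
        rms = np.array([np.sqrt(np.mean(c.astype(float) ** 2)) for c in channels])
        return int(np.argmax(rms / np.asarray(gains)))

    def tdoa_lag(left, right):
        """lag (in samples) of the cross-correlation peak between two mikes;
        roughly 0 when the talker is on the axis between them."""
        corr = np.correlate(left, right, mode="full")
        return int(np.argmax(corr)) - (len(right) - 1)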
so , the big arrays , places , like , rutgers , and brown , and other places , they have , big arrays with , i , a hundred mikes . professor b: and so there 's a wall of mikes . and you get really , really good beam - forming with that thing . and it 's and , at one point we had a proposal in with rutgers where we were gon na do some of the per channel signal - processing and they were gon na do the multi - channel , but it d we ended up not doing it . phd e: i ' ve seen demonstrations of the microphone arrays . it 's amazing how they can cut out noise . professor b: , our block of wood is unique . but the no , there are these commercial things now you can buy that have four mikes and , so , there 's a range of things that people do . , so if we connected up with somebody who was interested in doing that thing that 's a good thing to do . , whenever i ' ve described this to other people who are interested on the with the acoustic side that 's invariably the question they ask . just like someone who is interested in the general dialogue thing will always ask " , are you recording video ? " professor b: and and the acoustic people will always say , " are you doing , array microphones ? " so it 's a good thing to do , but it does n't solve the problem of how do you solve things when there 's one mike or at best two mikes in this imagined pda that we have . so maybe we 'll do some more of it . postdoc g: one thing , i . , i know that having an array of , i would imagine it would be more expensive to have a an array of microphones . but could n't you approximate the natural sis situation by just shutting off , channels when you 're later on ? , it seems like if the microphones do n't effect each other then could n't you just , record them with an array and then just not use all the data ? grad h: it 's it 's just a lot of infrastructure that for our particular purpose we felt we did n't need to set up . professor b: , if ninety - nine percent of what you 're doing is c is shutting off most of the mikes , then going through the but if you get somebody who 's who has that as a primary interest then that put then that drives it in that direction . grad h: that 's right , if someone came in and said we really want to do it , we do n't care . that would be fine , phd e: so to save that data you you have to have one channel recording per mike in the array ? professor b: there 's i what they 're going to do and i how big their array is . if you were gon na save all of those channels for later research you 'd use up a lot of space . and , th grad h: their software infrastructure had a very elaborate design for plugging in filters , and mixers , and all sorts of processing . so that they can do in real time and not save out each channel individually . professor b: but , for optimum flexibility later you 'd want to save each channel . but in practical situations you would have some engine of some sort doing some processing to reduce this to some to the equivalent of a single microphone that was very directional . 
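delay - and - sum is the basic trick behind the beam - forming mentioned here : delay each channel so a chosen direction lines up , then average , which reinforces that direction and averages everything else down . a toy , integer - sample version :

    import numpy as np

    def delay_and_sum(channels, delays):
        """channels: equal-length 1-D arrays; delays: steering delay per channel.
        np.roll wraps at the edges, which is fine for a toy sketch."""
        out = np.zeros(len(channels[0]))
        for sig, d in zip(channels, delays):
            out += np.roll(sig, -d)
        return out / len(channels)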
phd a: it seems to me that there 's , there are good political reasons for doing this , just getting the data , because there 's a number of sites like right now sri is probably gon na invest a lot of internal funding into recording meetings also , which is good , but they 'll be recording with video and they 'll be , it 'd be if we can have at least , make use of the data that we 're recording as we go since it 's this is the first site that has really collected these really impromptu meetings , and just have this other information available . so , if we can get the investment in just for the infra infrastructure and then , i , save it out or have whoever 's interested save that data out , transfer it there , it 'd be g it 'd be good to have the recording . phd a: i ' m not about video . that 's an video has a little different nature since right n right now we 're all being recorded but we 're not being taped . , but it definitely in the case of microphone arrays , since if there was a community interested in this , then grad h: , but we need a researcher here who 's interested in it . to push it along . professor b: see the problem is it took , it took at least six months for dan to get together the hardware and the software , and debug in the microphones , and in the boxes . and it was a really big deal . and so we could get a microphone array in here pretty easily and , have it mixed to one channel of some sort . but , e for , how we 're gon na decide for for maximum flexibility later you really do n't want to end up with just one channel that 's pointed in the direction of the p the person with the maximum energy like that . , you want actually to have multiple channels being recorded so that you can and to do that , it we 're going to end up greatly increasing the disk space that we use up , we also only have boards that will take up to sixteen channels and in this meeting , we ' ve got eight people and six mikes . and there we 're already using fourteen . phd a: if there 's a way to say time to solve each of these f those so suppose you can get an array in because there 's some person at berkeley who 's interested and has some equipment , and suppose we can as we save it we can , transfer it off to some other place that holds this data , who 's interested , and even if icsi it itself is n't . , and it seems like as long as we can time align the beginning , do we need to mix it with the rest ? i . ? the phd a: once you make the up front investment and can save it out each time , and not have to worry about the disk space factor , then it mi it might be worth having the data . professor b: i ' m not so much worried about disk space actually . i mentioned that , b as a practical matter , professor b: but the real issue is that , there is no way to do a recording extended to what we have now with low skew . so you would have a t completely separate set up , which would mean that the sampling times and would be all over the place compared to this . so it would depend on the level of pr processing you were doing later , but if you 're d i the person who 's doing array processing you actually care about funny little times . and and so you actually wou would want to have a completely different set up than we have , professor b: or a hun so , i ' m kinda skeptical , but that so , i do n't think we can share the resource in that way . 
but what we could do is , if there was someone else who 's interested , they could have a separate setup which they would n't be trying to synch with ours , which might be useful for them . professor b: we can offer the meetings , and the physical space , and the transcripts , and so on . phd a: ok , right . it 'd be nice if we have more information on the same data . but if it 's impossible or if it 's a lot of effort , then you have to just balance the two . professor b: the thing will be , again , in talking to these other people , to see what we can do . we 'll see . grad h: yes . but it 's exactly the same problem : you have an infrastructure problem , you have a problem with people not wanting to be videotaped , and you have the problem that no one who 's currently involved in the project is really hot to do it . phd a: right , internally . but i know there is interest from other places that are interested in looking at meeting data and having the video . so it 's just postdoc g: although i have to mention the human subjects problems , that increase with video . professor b: so there 's people getting shy about it . there 's this human subjects problem . there 's the fact that i ' ve heard comments about this before why do n't you just put on a video camera ? but it 's like saying , " we 're primarily interested in some dialogue things , but why do n't we just throw a microphone out there . " once you actually have serious interest in any of these things , then you actually have to put a lot of effort in , and you really want to do it right . professor b: so nist or ldc , or somebody like that , is in much better shape to do all that . there will be other meeting recordings . we wo n't be the only place doing meeting recordings . we are doing what we 're doing , and hopefully it 'll be useful . grad h: i ' m pretty sure . i 'll get another one before the end of the meeting . postdoc g: transcriptions , ok . there are maybe three aspects of this . so first of all , i ' ve got eight transcribers . seven of them are linguists , one of them is a graduate student in psychology . i gave each of them their own data set . two of them have already finished the data sets . and the meetings run , let 's say , an hour , sometimes as much as an hour and a half . postdoc g: each person got their own meeting . i did n't want to have any conflicts of when to stop transcribing this one , so i wanted to keep it clear whose data were whose . and meetings go as long as almost two hours in some cases . so that means , if we ' ve got two already finished and right now all eight of them have additional data sets , that means potentially as many as ten might be finished by the end of the month . i hope so . but the pre - segmentation really helps a huge amount . postdoc g: and also dan ellis 's innovation of the multi - channel here really helped a lot in terms of clearing up hearings that involve overlaps . but just out of curiosity i asked one of them how long it was taking her , one of these two who has already finished her data set . she said it takes about sixty minutes of transcription for every five minutes of real time . so it 's about twelve to one , which is what we were thinking . postdoc g: ok . these still , when they 're finished , that means that they 're finished with their pass through .
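the quoted rate works out as follows :

    60 min of transcription per 5 min of audio = 12 : 1 ,
    so a one - hour meeting is roughly 12 transcriber - hours
    and a ninety - minute meeting roughly 18 .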
postdoc g: they still need to be edited and all, but it's word level, speaker change, the things that were mentioned. ok, now i wanted to mention the teleconference i had with jonathan fiscus. we spoke for an hour and a half and had an awful lot of things in common. he indicated to me that he's been spending a lot of time with the atlas system (i'm not quite sure of the connection), and i need to read up on that; there's a web site that has lots of papers. but it looks to me like that's the name that has developed for the system that bird and liberman developed for the annotated-graphs approach. so what he wants me to do, and what we will do, is to provide them with the already-transcribed meeting for him to be able to experiment with in this atlas system. they do have some software related to atlas, at least that's my impression, and he wants to experiment with taking our data, putting it in that format, and seeing how that works out. i explained to him in detail the conventions that we're using here in this word-level transcript, and i explained the reasons that we were not coding more elaborately, and the focus on reliability. he expressed a lot of interest in reliability; he's really up on these things. independently he asked, "what about reliability?" so he's interested in the consistency of the encoding and that sort of thing. ok.
phd a: can you explain the atlas system? i'm not familiar with it.
postdoc g: well, at this point adam's read it in more detail than i have; i need to acquaint myself more with it. but there is a way of viewing it: whenever you have coding categories and you're dealing with a taxonomy, you can have branches with alternative choices that you could use for each of them, and it just ends up looking like a graphical representation.
grad h: is atlas their annotated transcription graph? i don't remember the acronym. what you're referring to: they have this concept of an annotated transcription graph representation.
grad h: and that's what i based the format i did on; i based it on their work almost directly, in combination with the tei. so it's very, very similar. it's a data representation and a set of tools for manipulating transcription graphs of various types.
phd e: is this the project that's between nist and a couple of other places?
postdoc g: mm-hmm. there's their web site that has lots of papers, and i looked through them, and they mainly had to do with this tree-structure, annotated tree-diagram thing. so in terms of the conventions that i've adopted, there's no conflict. and he was very interested: "and how'd you handle this?" and i said, "well, this way," and we had a really good conversation. ok, now i also wanted to mention a different direction: brian kingsbury. i corresponded briefly with him. he still has an account here; i told him he could ssh in, use multi-trans, and have a look at the already-done transcription. and he did. and what he said was that what they'll be providing will not be as fine-grained in terms of the time information.
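for readers unfamiliar with the annotated-graphs idea that atlas grew out of, here is a minimal sketch of the data model: time-anchored nodes with typed, labeled arcs between them. this is an illustration only, not the actual atlas software api.

```python
# Minimal sketch of an annotation graph in the spirit of Bird and
# Liberman: each annotation is a typed, labeled arc between two time
# anchors. Illustrative only; not the real ATLAS API.

from dataclasses import dataclass, field

@dataclass
class Arc:
    start: float     # anchor time in seconds
    end: float
    arc_type: str    # e.g. "word", "speaker", "overlap"
    label: str

@dataclass
class AnnotationGraph:
    arcs: list = field(default_factory=list)

    def add(self, start, end, arc_type, label):
        self.arcs.append(Arc(start, end, arc_type, label))

    def overlapping(self, t0, t1, arc_type=None):
        """All arcs of the given type intersecting the interval [t0, t1)."""
        return [a for a in self.arcs
                if a.start < t1 and a.end > t0
                and (arc_type is None or a.arc_type == arc_type)]

g = AnnotationGraph()
g.add(12.30, 12.85, "word", "ok")
g.add(12.10, 13.40, "speaker", "postdoc g")
print(g.overlapping(12.0, 13.0, "word"))
```

because arcs of different types can freely overlap in time, two speakers' words can share the same stretch of the timeline, which is exactly the property the overlap discussion here needs.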
postdoc g: and that's something i need to get back to him on, to explore a little bit more and see what they'll be giving us specifically.
phd e: the folks that they're subcontracting the transcription out to, are they like court reporters?
postdoc g: i get the sense they're like that, like a pool of somewhat secretarial workers. i don't think that they're court reporters; i don't think they have the special keyboards and that type of training. i get the sense they're more secretarial. and what they're doing is giving them...
phd e: so they're hiring them and they're coming in; it's not a service they send the tapes out to.
grad h: they do send it out, but my understanding is that transcription for ibm's speech product is all this company does.
postdoc g: and what they're doing is this: adam sent them a cd, and brian himself downloaded it, because we wanted them to be working in familiar terms with what they wanted to do. he downloaded from the cd onto audio tapes, one channel per audio tape. so each of these people is transcribing from one channel. and then what he's going to do is check it before they go beyond the first one: check it, adjust it, and all that.
grad h: well, but that's ok, because you'll do all of them and then combine them.
postdoc g: i think it would be difficult to do it that way. i really...
phd a: it's hard, just having played the individual files. i know you, i know what your voice sounds like, i'm familiar with it, and it's still pretty hard to follow, especially...
phd a: there are a lot of words that are so reduced phonetically that they only make sense when you know what the person was saying before.
grad h: and the answer is we don't actually know the answer, because we haven't tried both ways.
postdoc g: unless there's a huge disparity in terms of the volume on the mix, in which case they wouldn't be able to catch anything except the prominent channel, and then they'll switch between.
phd a: actually, are they giving any time markings? in other words, if...
postdoc g: i have to ask him. that's in my email to him; that answer needs to be forthcoming.
postdoc g: but i did want to say that it's hard to follow one channel of a conversation even if you know the people, and especially if you're dealing with highly abstract network concepts you've never heard of. one of these people was transcribing the networks group talk, and she said, "i don't really know what a lot of these abbreviations are," but she put them in parentheses, because that's the convention.
phd e: given all of the effort that is going on here in transcribing, why do we have ibm doing it? why not just do it all ourselves?
professor b: well, it's historical. at some point a while ago we thought, "boy, we'd really have to ramp up to do that,"
professor b: like we just did, and here was a collaborating institution that volunteered to do it. so that was a contribution they could make, in terms of time and money. and it still might be a good thing.
professor b: we'll see. they've proceeded along a bit. let's see what comes out of it, and have some more discussions with them.
postdoc g: it's a real benefit having brian involved, because of his knowledge of how the data need to be used, and so of what's useful to have in the format.
grad h: so, liz, with the sri recognizer, can it make use of some time marks?
phd a: yes, and actually i should say this is what don has been doing; he's already been really helpful in chopping these up. first of all, for the sri front-end we really need to chop things up into pieces that are not too huge. but second of all, in general, some of these channels, i'd say at least half of them on average, have segments with a lot of cross-talk. it's good to get short segments if you're going to do recognition, especially forced alignment. don has been taking a first stab, actually using the first meeting that jane transcribed, which we did have some problems with, and thilo told me why this was: people were adjusting microphones in the very beginning, so the sri recognizer...
phd c: no, they were not switching them; what they were doing was adjusting them.
phd a: so we have to normalize the front-end and have these small segments.
phd a: so we've taken that and chopped it into pieces, based always on the cuts that you made on the mixed signal, so that every speaker has the same cuts. and if a piece has speech in it we run it through, and if it doesn't have speech in it we don't run it through; we base that knowledge on the transcription.
phd a: the problem is, if we have no time marks, then for forced alignment we don't actually know where in the signal the transcriber heard each word. and so...
phd a: if it's a whole conversation and we get a long paragraph of talk...
phd e: you would need to do something like a forced alignment before you did the chopping, right?
phd a: no, we used the fact that when jane has transcribers do this, whether it's with the pre-segmentation or not,
phd a: they have a chunk, and then they transcribe the words in the chunk. maybe they choose the chunk themselves, or now they use a pre-segmentation and then correct it if necessary. but there's first a chunk and then a transcription, then a chunk, then a transcription. that's great, because the recognizer can...
phd a: right, and it helps that it's made based on heuristics and the human ear.
phd a: but there's going to be a real problem: even if we chop up based on speech and silence, with the transcripts from ibm we don't actually know where the words were, which segment they belonged to. so that's what i'm worried about right now.
phd a: even if you do it on something really long, you can always chop it up, but you need to have a reference of which words went with which chop.
postdoc g: now, wasn't one of the proposals that ibm was going to do an initial forced alignment, after they...
professor b: i'm sure they will, and so we have to have a dialogue with them about it. it sounds like liz has some concerns.
phd a: maybe they have some time information. even if it's not fine-grained, maybe the transcribers save it out in pieces. that would help. but it's just an unknown right now.
phd a: right. but it is true that the segments, well, i haven't tried the segments that thilo gave you, but the segments in your first meeting are great. that's a good length.
postdoc g: well, i was thinking it would be fun, if you wouldn't mind, to give us a pre-segmentation.
postdoc g: maybe you have one already of that first meeting, the first transcribed meeting, the one that i transcribed.
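the chunking scheme described above, cutting every channel at the same mixed-signal boundaries and running only the pieces the transcript marks as containing speech, might look roughly like this. the file names, segment times, and transcript structure below are hypothetical.

```python
# Schematic of the chunking step: cut each channel at the boundaries
# taken from the mixed-signal segmentation, and keep, per channel,
# only the chunks the transcript says contain speech. Names and times
# below are invented for illustration.

import wave

def cut_segment(in_path, out_path, start_s, end_s):
    """Copy [start_s, end_s) of a WAV file into a new WAV file."""
    with wave.open(in_path, "rb") as w:
        rate = w.getframerate()
        w.setpos(int(start_s * rate))
        frames = w.readframes(int((end_s - start_s) * rate))
        params = w.getparams()
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        out.writeframes(frames)

# (start, end, set of channels with transcribed speech in this chunk)
segments = [(0.0, 7.2, {"chan1"}), (7.2, 13.5, {"chan1", "chan3"})]

for i, (t0, t1, active) in enumerate(segments):
    for chan in active:  # channels without transcribed speech are skipped
        cut_segment(f"{chan}.wav", f"{chan}_seg{i:03d}.wav", t0, t1)
```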
phd c: but that's the one we're training on, so that's a little bit...
phd a: and actually, as you get transcripts for new meetings, we can try them; the more data we have to try the alignments on, the better. so it'd be good just to know as transcriptions are coming through the pipeline from the transcribers, because we're playing around with parameters on the recognizer, and that would be helpful, especially as you get more voices.
postdoc g: liz and i spoke at some length on tuesday, and i was planning to do just a preliminary look over the two that are finished and then give them to you.
professor b: that's great. the other thing, and i can't remember if we discussed this in this meeting, but i know you and i talked about it a little bit, is this: suppose we get into the enviable position, although maybe it just says where the weak link is in the chain, where we have all the data transcribed, we have these transcribers, and we're still a bit slow on feeding them. at that point we've caught up, and the weak link is recording meetings. two questions come up. one is how we step up the recorded meetings; it's not really a problem at the moment because we haven't reached that point. and the other is whether there's some good use we can make of the transcribers to do other things. i can't remember how much we talked about this in this meeting, but there was...
postdoc g: and there is one use that we also discussed, which is that when dave finishes the modification to multi-trans (and maybe it's already finished) which will allow fine-grained encoding of overlaps, then these people would be very good to shift over to finer-grained encoding of overlaps. it's just a matter of providing the tool: if right now you have two overlapping segments in the same time bin, with the improvement in the interface it'd be possible to just do a click-and-drag thing and get the specific place of each of them, the time tag associated with the beginning and end of each segment.
professor b: right, so we're talking about three things. one was that we had some discussion in the past about some very high-level labelings,
professor b: types of overlaps, and that's something someone could do. the second was, at a somewhat lower level, just doing these more precise timings. and the third is just a completely wild, hare-brained idea that i have, which is that if we have time and people are able to do it, we take some subset of the data and do some very fine-grained analysis of the speech: marking, in some potentially overlapping fashion, the values of articulatory features. just say, ok, it's voiced from here to here, it's nasal from here to here, and so on, as opposed to doing phonemic and phonetic analysis and assuming articulatory feature values from those. that's extremely time-consuming.
postdoc g: also, if you're dealing with consonants, that would be easier than vowels, wouldn't it? i would think that being able to code that there's a fricative extending from here to here would be a lot easier than classifying precisely which vowel that was.
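one hypothetical way to store the kind of overlapping articulatory-feature marking being proposed is an independent tier per feature, where intervals need not line up across tiers the way phone segments would. a sketch, with invented times and values:

```python
# Hypothetical storage for overlapping articulatory-feature annotation:
# one tier per feature, each a list of (start, end, value) intervals.
# Tiers are independent, so feature changes need not be synchronized.

tiers = {
    "voicing":  [(1.20, 1.55, "voiced"), (1.55, 1.70, "voiceless")],
    "nasality": [(1.30, 1.62, "nasal")],
    "manner":   [(1.20, 1.48, "vowel"), (1.48, 1.70, "fricative")],
}

def features_at(tiers, t):
    """All feature values active at time t (in seconds)."""
    return {name: value
            for name, spans in tiers.items()
            for start, end, value in spans
            if start <= t < end}

print(features_at(tiers, 1.50))
# {'voicing': 'voiced', 'nasality': 'nasal', 'manner': 'fricative'}
```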
professor b: yes, but there's also the issue that when you look at switchboard very close up, there are places where, whether it's a consonant or a vowel, you still have trouble calling it a particular phone at that point.
professor b: now, i'm suggesting articulatory features. maybe there's even a better way to do it, but that's a traditional way of describing these things, and actually this might be a neat thing to talk to...
professor b: it's still some set of categories, but something that allows for overlapping changes of these things. and this would give some more groundwork for people who are building statistical models that allow for overlapping changes, different timing changes, as opposed to just "click, you're now in this state, which corresponds to this speech sound," and so on.
professor b: yes, something like that. actually, if we get into that, it might be good to haul john ohala into this and ask his views on it.
phd a: is it so that you can do far-field studies of those gestures, or is it because you think there's a different actual production that people use in meetings?
professor b: no, for that purpose i'm just viewing meetings as a neat way to get people talking naturally. and then it's natural in all senses,
professor b: in the sense that you have microphones at a distance that one might have, and you have the close mikes, and you have people talking naturally. and the overlap is just indicative of the fact that people are talking naturally, right? so given that it's that kind of corpus, if it's going to be a very useful corpus, then even though we've limited its use by some of our choices (we don't have the video, and so on), there's a lot of use we could make of it by expanding the annotation choices. and most of the things we've talked about have been fairly high level; being a bottom-up person, maybe we'd do some of the others.
postdoc g: it's a balance. it would be really nice to offer those things with that wide range.
professor b: people have made a lot of use of timit due to its markings, and the switchboard transcription work has been very useful for a lot of people.
phd a: that's true. i wanted to make a pitch for trying to collect more meetings. i talked to chuck fillmore, and they've vehemently said no before, but this time he wasn't vehement, and he said, "well, liz, come to the meeting tomorrow and try to convince people." so i'm going to try: go to their meeting tomorrow and see if we can convince them,
phd a: because they have very interesting meetings from the point of view of a very different type of talk than we have here,
phd a: yes, in terms of the fact that they're describing abstract things, and just dialogue-wise. so i'll try. and then the other thing is, and i don't know if this is useful, but i asked lila if maybe i could go around and talk to the different departments in this building, to see if there are any groups that might be willing, for a free lunch, if we can still offer that.
grad h: but the problem is that so much of their work is confidential. it would be very hard for them.
postdoc g: also, it does seem like it takes us way out of the demographic.
it seems like we had this idea before, of having linguistics students brought down for free lunches.
phd a: right, and then we could also try advertising again, because it'd be good if we can get a few different non-internal types of meetings, and just more data.
phd a: so i actually wrote to him, and he answered, "great, that sounds really interesting." but i never heard back, because we didn't actually advertise openly; i asked him privately. and it is a little bit of a trek for campus folks.
grad h: but it would be nice if we got someone other than me who knew how to set it up and could do the recording, so i didn't have to do it each time.
phd a: and i'm willing to try to learn; i would do my best. the other thing is that there are a number of things on the transcription side that transcribers can do, like dialogue-act tagging,
phd a: disfluency tagging, things that are in the speech that are actually something we're working on for language modeling. mari's also interested in it, and andreas as well. so if you want to process an utterance, and the first thing the speaker says is "well," that gets coded with some interrupt tag. things like that.
phd a: great. so a lot of this would need a second pass, and i don't really know if it would exist. but there's definitely a second pass worth doing, to maybe encode some kinds of things, like whether something is a question or not, that maybe these transcribers could do.
postdoc g: while we're on this, to return just briefly to this question of more meeting data, i have two questions. one of them is about jerry feldman's group. i know that they recorded one meeting. are they willing?
professor b: we should go beyond icsi, but there's also a lot happening at icsi that we're not getting now that we could.
professor b: no. so there was the thing in fillmore's group, but even there, what he'd said "no" to was the main meeting. they have several smaller meetings a week, and the notion was raised before that that could happen. it just didn't come together.
phd e: and the other thing, too, is that when they originally said "no" they didn't know about this post-editing capability.
professor b: so there are possibilities there. jerry's group, yes. and there's the networks group. do they still meet regularly?
grad h: he was my contact, so i need to find out who's running it now.
phd a: and this sounds bizarre, but i'd really like to get some meetings where there's a little bit of heated discussion, like arguments or emotion, things like that. and so i was thinking, are there any berkeley political groups? that'd be perfect. some group: "yes, we must..."
phd a: ok. no, but maybe student groups, or film-makers, or something a little bit colorful.
professor b: there's a problem there in terms of the commercial value of...
postdoc g: there is this problem, though: if we give them the chance to excise things later, we might end up with like five minutes out of a one-hour meeting.
phd a: but just something with some more variation in prosodic contours would be neat. so if anyone has ideas, i'm willing to do the legwork to go try to talk to people, but i don't really know which groups are worth pursuing.
postdoc g: and i had one other aspect of this, which is that jonathan fiscus expressed a major interest in having meetings which were all english speakers. now, he wasn't trying to shape us in terms of what we gather, but that's what he wanted me to show him. so i'm giving him our initial meeting, because he asked for all english. and we don't have a lot of all-english meetings right now.
phd a: i was thinking, knowing the national institute of standards, it is all...
professor b: i remember a study that bbn did, back in wall street journal days, where they trained on american english and then tested on different native speakers from different areas. the worst match was people whose native tongue was mandarin chinese. the second worst was british english.
professor b: it was swiss, actually. so if he's thinking in terms of recognition technology, i guess he would probably want american english.
postdoc g: the feldman meetings tend to be more that way, aren't they? i feel like they have...
grad h: and maybe there are a few of ours where dan wasn't there, before jose started coming, and...
professor b: so, what about people who are involved in some artistic endeavor? film-making, something like that.
phd a: something where there is actually discussion, where there's no right or wrong answer but it's a matter of opinion. anyway, if you have ideas...
phd a: and that has a chance to give us some very interesting, fun data. if anyone has ideas of any groups that might...
grad h: the business school might be good. i actually spoke with some students up there, and they expressed willingness back when they thought they would be doing more with speech.
grad h: but when they lost interest in speech they also stopped answering my email about other things, so.
grad f: i heard that at cal tech they have a special room; someone said they had a special room to get all your frustrations out, that you can go to and throw things and break things.
professor b: yes, but we don't want them to throw the far-field mikes, is the thing.
grad h: there was a dorm room at tech where someone had coated the walls, and the ceiling, and the floor with mattresses. the entire room.
professor b: i had as my fourth item here "processing of waveforms." what did we mean by that? does anyone remember?
grad h: liz wanted to talk about methods of improving accuracy by doing pre-processing.
phd a: right. it would be helpful to stay in the loop somehow with people who are doing any post-processing, whether it's to separate speakers or to improve the signal-to-noise ratio, or both, so that we can try it out as we're running recognition. so, who else is working on this? dan ellis, and you?
professor b: he's interested; we're starting to look at some echo-cancellation things.
professor b: no. when you're saying improving the waveform, what you want is for the close-talking microphone to be better.
professor b: and the question is to what extent it is getting hurt by any room acoustics, or whether, given that it's close, it's not a problem.
phd a: it doesn't seem like a big room-acoustics problem to my ear, but i'm not an expert. it seems like a problem with cross-talk.
phd a: all i meant is just that as this pipeline of research is going on, we're also experimenting with different asr techniques, and so it'd be good to know about it.
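for the cross-talk problem that comes up next, one textbook signal-processing approach is an adaptive filter (normalized lms) that subtracts from a close-talking channel whatever can be linearly predicted from a neighboring channel. this is a generic sketch, not the echo-cancellation method actually being looked at here.

```python
# Textbook NLMS adaptive filter for cross-talk / echo cancellation:
# remove from `target` the part predictable from `reference` (e.g. a
# neighboring speaker's mike). Generic sketch, not the group's method.

import numpy as np

def nlms_cancel(target, reference, taps=128, mu=0.1, eps=1e-8):
    """Return target with the reference-predictable component removed."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(taps)                     # adaptive filter weights
    out = np.zeros_like(target)
    for n in range(taps, len(target)):
        x = reference[n - taps:n][::-1]    # most recent samples first
        y = w @ x                          # predicted cross-talk
        e = target[n] - y                  # error = cleaned sample
        w += mu * e * x / (x @ x + eps)    # normalized LMS update
        out[n] = e
    return out
```

whether something like this helps for the lapel mike, where the reference channel is itself a mixture of speakers, is exactly the open question raised in this discussion.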
phd e: so the problem is that the microphone of somebody who's not talking is picking up signals from other people, and that's causing problems?
phd a: right, although if they're not talking at all, then using the in-house transcriptions we're ok, because no one transcribed any words there and we throw it out. but suppose they're talking, and they're not talking the whole time: you get some speech, and then a pause, and some more speech, and that whole thing is one chunk. then the person in the middle, who said only a little bit, is picking up the speech around it, and that's where it's a big problem.
postdoc g: well, this does seem like it would relate to some of what jose's been working on as well, the encoding of acoustic events. and he also...
postdoc g: i was trying to remember: you have this interface, which you showed us one time on your laptop, where you had different visual displays for speech and non-speech events.
phd d: i only display different colors for the different situations. but for me and for my problems that is enough, because it makes it possible, in a simple view, to compare the segment with an assessment of what happened with the different parameters. and just having different bands of color for the few situations i consider acoustic events is enough. i see that you are now considering a very sophisticated set of graphic symbols for transcribing, no? because before, you were talking about the possibility of including in the transcriber program a set of graphic symbols to mark the different situations during the transcription.
postdoc g: oh, you're saying symbols for differences between a laugh, and a sigh, and slamming the door, and so on?
postdoc g: well, i wouldn't say symbols so much. the main change that i see in the interface is just that we'll be able to time things more finely. but there was another aspect of your work that i was thinking about when i was talking to you, which is that it sounded to me, liz, and maybe i didn't quite understand this, as though part of the analysis that you're doing involves taking segments of a particular type and putting them together. so if you have speech from one speaker, you cut out the part that's not that speaker, and you combine segments from that same speaker and run them through the recognizer. is that right?
phd a: we try to find as close a start and end time as we can around the speech from an individual speaker, because then we're more guaranteed that the recognizer will succeed at the forced alignment, which is just to give us the time boundaries; from those time boundaries the plan is then to compute prosodic features. and the more space you have that isn't the thing you're trying to align, the more errors we get. so it would help to have either pre-processing of the signal that creates a very good signal-to-noise ratio,
phd a: which i don't know how possible that is for the lapel, or to have closer synch times around the speech that gets transcribed, or both. it's just an open world right now, exploring that. so i wanted to see; on the transcribing end, from here, things look good. the ibm one is more of an open question right now.
and then the issue of doing global processing of the signal before we chop it up is yet another way we can improve things.
phd e: what about increasing the flexibility of the alignment? do you remember that thing that michael finke did, that experiment he did a while back?
phd a: right. the problem is just that when the signal-to-noise ratio is too low, you'll get an alignment with the wrong duration pattern, or it...
phd e: oh, so that's the problem: the signal-to-noise ratio.
phd a: yes. it's not an issue of what he did, which was to allow words that were in another segment to move over at the edges of segmentations.
phd a: right, things near the boundaries, where you might have got your alignment wrong, because what they had done there was align and then chop. this problem is a little bit more global. there are problems even inside the alignments, because there's enough acoustic signal there for the recognizer to eat as part of a word, and it tends to do that. so we probably will have to do something like that in addition. anyway, the bottom line is just that i wanted to make people aware, whoever's working on these signal-processing techniques for detecting energies, because that'll really help us.
professor b: ok, tea has started out there, so i suggest we run through our digits. and, ok, we're done.
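the energy-based channel-activity detection mentioned at several points above could look roughly like the following; frame sizes, the normalization, and the thresholds are arbitrary illustrations rather than the detector actually in use.

```python
# Rough sketch of cross-channel energy comparison: compute short-term
# log energy per close-talking channel, z-normalize each channel so
# quiet and loud mikes are comparable, and call a channel active when
# it tracks the frame-wise maximum closely enough. All thresholds here
# are arbitrary illustrations.

import numpy as np

def frame_log_energy(x, frame=400, hop=160):
    """Log energy per 25 ms frame (frame=400, hop=160 at 16 kHz)."""
    x = np.asarray(x, dtype=float)
    n = 1 + max(0, (len(x) - frame) // hop)
    idx = np.arange(frame) + hop * np.arange(n)[:, None]
    return np.log(np.sum(x[idx] ** 2, axis=1) + 1e-10)

def active_channels(channels, margin=1.0, min_fraction=0.2):
    """channels: dict name -> equal-length waveform. Returns active names."""
    z = {}
    for name, x in channels.items():
        e = frame_log_energy(x)
        z[name] = (e - e.mean()) / (e.std() + 1e-10)
    best = np.stack(list(z.values())).max(axis=0)  # frame-wise winner
    return [name for name, v in z.items()
            if np.mean(v > best - margin) > min_fraction]
```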
the berkeley meeting recorder group discussed recording equipment and setup issues, recent developments in the transcription effort, other potential types of tagging to be assigned to transcribers, and the post-processing of waveforms. the discussion was largely focused on efforts to facilitate transcriptions, including the improvement of strategies for transcribing overlapping speech, and achieving greater uniformity in the type of equipment used during recordings and the manner in which recording devices are worn by speakers.

to achieve greater uniformity in across-speaker recording conditions, the group decided to purchase three additional head-mounted microphones. future work will include recording more varied meeting data from non-icsi discussion groups. it was proposed by speaker fe016 that better communication be established between researchers involved in post-processing of the waveform and asr.

use of dissimilar microphones adds an extra, unwanted variable to individual speaker recordings. similarly, differences in the type of recording equipment used and the manner in which microphones are worn by speakers cause problems for the transcription effort. setting up a microphone array and performing video recordings (in a possible collaboration with nist) are problematic due to the types of changes in infrastructure they require. ibm's single-channel approach to transcriptions may pose problems for the post-processing of waveforms and forced alignments, as the group foresees difficulties in referencing chopped segments back to the original times and locations from which they were extracted. another post-processing problem involves cross-talk, in particular situations in which a speaker's contributions to the discussion are relatively sparse but his or her microphone picks up signals from the other speakers.

modifications are being made to multi-trans to enable tight time markings at the boundaries of overlapping speech segments and to facilitate the transcription of such segments. pre-segmentation continues to be very beneficial to the transcription effort. work by speaker mn014 is in progress to compare the energies of different channels for detecting speech/non-speech portions, facilitating transcriptions, and potentially providing speaker identification information. efforts are being developed to create a cross-correlation setup linking recorded data with a map of where individual speakers were seated. the transcriber pool has been performing within the expected range of work completed per the amount of time spent transcribing. ibm has a team of people employed to transcribe meeting data, who are transcribing single channels rather than multiple channels. the group discussed the potential for assigning additional tasks to icsi's transcriber pool, including tagging more fine-grained acoustic information, and discourse and disfluency tagging.
phd a: it seems to me that there 's , there are good political reasons for doing this , just getting the data , because there 's a number of sites like right now sri is probably gon na invest a lot of internal funding into recording meetings also , which is good , but they 'll be recording with video and they 'll be , it 'd be if we can have at least , make use of the data that we 're recording as we go since it 's this is the first site that has really collected these really impromptu meetings , and just have this other information available . so , if we can get the investment in just for the infra infrastructure and then , i , save it out or have whoever 's interested save that data out , transfer it there , it 'd be g it 'd be good to have the recording . phd a: i ' m not about video . that 's an video has a little different nature since right n right now we 're all being recorded but we 're not being taped . , but it definitely in the case of microphone arrays , since if there was a community interested in this , then grad h: , but we need a researcher here who 's interested in it . to push it along . professor b: see the problem is it took , it took at least six months for dan to get together the hardware and the software , and debug in the microphones , and in the boxes . and it was a really big deal . and so we could get a microphone array in here pretty easily and , have it mixed to one channel of some sort . but , e for , how we 're gon na decide for for maximum flexibility later you really do n't want to end up with just one channel that 's pointed in the direction of the p the person with the maximum energy like that . , you want actually to have multiple channels being recorded so that you can and to do that , it we 're going to end up greatly increasing the disk space that we use up , we also only have boards that will take up to sixteen channels and in this meeting , we ' ve got eight people and six mikes . and there we 're already using fourteen . phd a: if there 's a way to say time to solve each of these f those so suppose you can get an array in because there 's some person at berkeley who 's interested and has some equipment , and suppose we can as we save it we can , transfer it off to some other place that holds this data , who 's interested , and even if icsi it itself is n't . , and it seems like as long as we can time align the beginning , do we need to mix it with the rest ? i . ? the phd a: once you make the up front investment and can save it out each time , and not have to worry about the disk space factor , then it mi it might be worth having the data . professor b: i ' m not so much worried about disk space actually . i mentioned that , b as a practical matter , professor b: but the real issue is that , there is no way to do a recording extended to what we have now with low skew . so you would have a t completely separate set up , which would mean that the sampling times and would be all over the place compared to this . so it would depend on the level of pr processing you were doing later , but if you 're d i the person who 's doing array processing you actually care about funny little times . and and so you actually wou would want to have a completely different set up than we have , professor b: or a hun so , i ' m kinda skeptical , but that so , i do n't think we can share the resource in that way . 
but what we could do is if there was someone else who 's interested they could have a separate set up which they would n't be trying to synch with ours which might be useful for them . professor b: , we can o offer the meetings , and the physical space , and , the transcripts , and so on . phd a: ok . right , just it 'd be if we have more information on the same data . , but it 's if it 's impossible or if it 's a lot of effort then you have to just balance the two , professor b: , the thing will be , u in again , in talking to these other people to see what , what we can do . , we 'll see . grad h: yes , . but it 's exactly the same problem , that you have an infrastructure problem , you have a problem with people not wanting to be video taped , and you have the problem that no one who 's currently involved in the project is really hot to do it . phd a: right . internally , but i know there is interest from other places that are interested in looking at meeting data and having the video . so it 's just postdoc g: , w although i m i have to u mention the human subjects problems , that i increase with video . professor b: , so it 's , people getting shy about it . there 's this human subjects problem . there 's the fact that then , if i i ' ve heard comments about this before , why do n't you just put on a video camera ? but , it 's like saying , " , we 're primarily interested in some dialogue things , but , why do n't we just throw a microphone out there . " , once you actually have serious interest in any of these things then you actually have to put a lot of effort in . and , you really want to do it right . professor b: so nist or ldc , or somebody like that is much better shape to do all that . we there will be other meeting recordings . we wo n't be the only place doing meeting recordings . we are doing what we 're doing . and , hopefully it 'll be useful . grad h: i ' m pretty . i 'll get another one before the end of the meeting . postdoc g: transcriptions , ok . , about there are maybe three aspects of this . so first of all , i ' ve got eight transcribers . , seven of them are linguists . one of them is a graduate student in psychology . , each i gave each of them , their own data set . two of them have already finished the data sets . and the meetings run , let 's say an hour . sometimes as man much as an hour and a half . postdoc g: each each person got their own meeting . i did n't want to have any conflicts of , of when to stop transcribing this one or so i wanted to keep it clear whose data were whose , and so and , meetings , that they 're they go as long as a almost two hours in some cases . so , that means , if we ' ve got two already finished and they 're working on , right now all eight of them have differe , additional data sets . that means potentially as many as ten might be finished by the end of the month . hope so . but the pre - segmentation really helps a huge amount . postdoc g: and , also dan ellis 's innovation of the , the multi - channel to here really helped a r a lot in terms of clearing up h hearings that involve overlaps . but , just out of curiosity i asked one of them how long it was taking her , one of these two who has already finished her data set . she said it takes about , sixty minutes transcription for every five minutes of real time . so it 's about twelve to one , which is what we were thinking . postdoc g: ok . , these still , when they 're finished , that means that they 're finished with their pass through . 
they still need to be edited and all but but it 's word level , speaker change , the things that were mentioned . ok , now i wanted to mention the , teleconference i had with , jonathan fiscus . we spoke for an hour and a half and , had an awful lot of things in common . he , he in indicated to me that they ' ve that he 's been , looking , spending a lot of time with i ' m not quite the connection , but spending a lot of time with the atlas system . and i that , i need to read up on that . and there 's a web site that has lots of papers . but it looks to me like that 's the name that has developed for the system that bird and liberman developed for the annotated graphs approach . so what he wants me to do and what we will do and , is to provide them with the u already transcribed meeting for him to be able to experiment with in this atlas system . and they do have some software , at least that 's my impression , related to atlas and that he wants to experiment with taking our data and putting them in that format , and see how that works out . i explained to him in detail the , conventions that we 're using here in this word level transcript . and , , i explained , the reasons that we were not coding more elaborately and the focus on reliability . he expressed a lot of interest in reliability . it 's like he 's really up on these things . he 's he 's very , independently he asked , " what about reliability ? " so , he 's interested in the consistency of the encoding and that thing . ok , phd a: can you explain what the atlas i ' m not familiar with this atlas system . postdoc g: , at this point , adam 's read more in more detail than i have on this . i need to acquaint myself more with it . but , there is a way of viewing , whenever you have coding categories , and you 're dealing with , a taxonomy , then you can have branches that have alternative , choices that you could use for each of them . and it just ends up looking like a graphical representation . grad h: is is is atlas the his annotated transcription graph ? i do n't remember the acronym . the the one the what you 're referring to , they have this concept of an annotated transcription graph representation . grad h: and that 's what i based the format that i did i based it on their work almost directly , in combination with the tei . and so it 's very , very similar . and so it 's a data representation and a set of tools for manipulating transcription graphs of various types . phd e: is this the project that 's , between , nist and , a couple of other places ? the the postdoc g: - . then there 's their web site that has lots of papers . and i looked through them and they mainly had to do with this , tree structure , annotated tree diagram thing . so , and , in terms of like the conventions that i ' m a that i ' ve adopted , it there 's no conflict . and he was , very interested . and , " , and how 'd you handle this ? " and i said , " , this way " and and and we had a really conversation . , ok , now i also wanted to say in a different direction is , brian kingsbury . so , i corresponded briefly with him . i , c i he still has an account here . i told him he could ssh on and use multi - trans , and have a look at the already done , transcription . and he and he did . and what he said was that , what they 'll be providing is will not be as fine grained in terms of the time information . 
and , that 's , i need to get back to him and , , explore that a little bit more and see what they 'll be giving us in specific , phd e: the the folks that they 're , subcontracting out the transcription to , are they like court reporters postdoc g: , i get the sense they 're like that . like it 's like a pool of somewhat , secretarial i do n't think that they 're court reporters . i do n't think they have the special keyboards and that type of training . i get the sense they 're more secretarial . and that , , what they 're doing is giving them phd e: , so they 're hiring them , they 're coming . it 's not a service they send the tapes out to . grad h: they do send it out but my understanding is that 's all this company does is transcriptions for ibm for their speech product . postdoc g: and and what they 're doing is brian himself downloaded so , adam sent them a cd and brian himself downloaded , cuz , , we wanted to have it so that they were in familiar f terms with what they wanted to do . he downloaded from the cd onto audio tapes . and he did it one channel per audio tape . so each of these people is transcribing from one channel . and then what he 's going to do is check it , a before they go be beyond the first one . check it and , adjust it , and all that . grad h: , but that 's ok , because , you 'll do all them and then combine them . postdoc g: i have t i , i it would be difficult to do it that way . i really phd a: it 's hard just playing the , just having played the individual files . and , i know you . i your voice sounds like . i ' m familiar with , it 's pretty hard to follow , especially phd a: there are a lot of words that are so reduced phonetically that make sense when what the person was saying before . grad h: and the answer is we do n't actually know the answer because we have n't tried both ways . postdoc g: unless there 's a huge disparity in terms of the volume on the mix . in which case , they would n't be able to catch anything except the prominent channel , then they 'll switch between . phd a: actually , are th so are they giving any time markings ? in other words , if postdoc g: , i have to ask him . and that 's my email to him . that needs to be forthcoming . postdoc g: but but the , i did want to say that it 's hard to follow one channel of a conversation even if the people , and if you 're dealing furthermore with highly abstract network concepts you ' ve never heard of so , one of these people was transcribing the , networks group talk and she said , " i do n't really a lot of these abbreviations are , " but put them in parentheses cuz that 's the convention and cuz , if you phd e: given all of the effort that is going on here in transcribing why do we have i b m doing it ? why not just do it all ourselves ? professor b: , it 's historical . , some point ago we thought that , it " boy , we 'd really have to ramp up to do that " , professor b: , like we just did , and , here 's , a , collaborating institution that 's volunteered to do it . so , that was a contribution they could make . in terms of time , money , ? and it still might be a good thing professor b: we 'll see . , th , they ' ve proceeded along a bit . let 's see what comes out of it , and , , have some more discussions with them . postdoc g: it 's very a real benefit having brian involved because of his knowledge of what the how the data need to be used and so what 's useful to have in the format . grad h: so , liz , with the sri recognizer , can it make use of some time marks ? 
phd a: and actually i should say this is what don has b , he 's already been really helpful in , chopping up these so so first of all you , for the sri front - end , we really need to chop things up into pieces that are f not too huge . , but second of all , in general because some of these channels , i 'd say , like , i , at least half of them probably on average are g are ha are have a lot of cross - ta , some of the segments have a lot of cross - talk . , it 's good to get short segments if you 're gon na do recognition , especially forced alignment . , don has been taking a first stab actually using jane 's first the fir the meeting that jane transcribed which we did have some problems with , and thilo , told me why this was , but that people were switching microphones around in the very beginning , the sri re phd c: no , th . no . they they were not switching them but what they were adjusting them , phd a: so we have to normalize the front - end and , and have these small segments . phd a: so we ' ve taken that and chopped it into pieces based always on your , cuts that you made on the mixed signal . and so that every speaker has the same cuts . and if they have speech in it we run it through . and if they do n't have speech in it we do n't run it through . and we base that knowledge on the transcription . phd a: , the problem is if we have no time marks , then for forced alignment we actually where , in the signal the transcriber heard that word . and so phd a: , if it 's a whole conversation and we get a long , , par paragraph of talk , phd e: you would need to like a forced alignment before you did the chopping , right ? phd a: no , we used the fact that so when jane transcribes them the way she has transcribers doing this , whether it 's with the pre - segmentation or not , phd a: they have a chunk and then they transcribes the words in the chunk . and maybe they choose the chunk or now they use a pre - segmentation and then correct it if necessary . but there 's first a chunk and then a transcription . then a chunk , then a transcription . that 's great , cuz the recognizer can phd a: right , and it helps that it 's made based on heuristics and human ear . phd a: th - but there 's going to be a real problem , even if we chop up based on speech silence these , the transcripts from i b m , we do n't actually know where the words were , which segment they belonged to . so that 's what i ' m worried about right now . phd a: even if you do it on something really long you need to know you can always chop it up but you need to have a reference of which words went with which , chop . postdoc g: now was n't that one of the proposals was that ibm was going to do an initial forced alignment , after they professor b: , i ' m they will and so we have to have a dialogue with them about it . , it sounds like liz has some concerns phd a: maybe they have some , maybe actually there is some , even if they 're not fine grained , maybe the transcribers , i , maybe it 's saved out in pieces . that would help . but , it 's just an unknown right now . phd a: right . but the it is true that the segments i have n't tried the segments that thilo gave you but the segments that in your first meeting are great . , that 's a good length . postdoc g: , i was thinking it would be fun to , if you would n't mind , to give us a pre - segmentation . postdoc g: , maybe you have one already of that first m of the meeting that , the first transcribed meeting , the one that i transcribed . 
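a minimal sketch of the chopping scheme just described , in which every channel is cut at the same boundaries taken from the mixed - signal segmentation and only chunks whose transcript contains speech for that speaker are sent to forced alignment . the data layout is assumed for illustration ; this is not the actual pipeline code :

```python
# hypothetical sketch: cut each close-talking channel at the segment
# boundaries defined on the mixed signal, and keep only segments in which
# the transcript shows speech for that speaker, so forced alignment sees
# short chunks with known content.

def chop_for_alignment(segments, transcripts):
    """
    segments:    list of (start_sec, end_sec) cuts made on the mixed signal
    transcripts: dict mapping (speaker, segment_index) -> word string,
                 absent or empty when that speaker is silent there
    returns:     list of alignment jobs (speaker, start, end, words)
    """
    jobs = []
    for i, (start, end) in enumerate(segments):
        for (speaker, seg_idx), words in transcripts.items():
            if seg_idx == i and words.strip():
                # only run the recognizer where this speaker actually spoke
                jobs.append((speaker, start, end, words))
    return jobs

segments = [(0.0, 4.2), (4.2, 9.7)]
transcripts = {("me011", 0): "so we chopped it up",
               ("fe008", 1): "right and it helps"}
print(chop_for_alignment(segments, transcripts))
```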
phd c: but that 's the one where we 're , trai training on , so that 's a little bit phd a: and actually as you get transcripts just , for new meetings , we can try , the more data we have to try the alignments on , the better . so it 'd be good for just to know as transcriptions are coming through the pipeline from the transcribers , just to we 're playing around with , parameters f on the recognizer , cuz that would be helpful . especially as you get , en more voices . postdoc g: , liz and i spoke d w at some length on tuesday and i was planning to do just a preliminary look over of the two that are finished and then give them to you . professor b: that 's great . i the other thing , i ca n't remember if we discussed this in the meeting but , i know you and i talked about this a little bit , there was an issue of , suppose we get in the , i it 's enviable position although maybe it 's just saying where the weak link is in the chain , where we , we have all the data transcribed and we have these transcribers and we were we 're the we 're still a bit slow on feeding at that point we ' ve caught up and the , the weak link is recording meetings . two questions come , is what how do we , it 's not really a problem at the moment cuz we have n't reached that point but how do we step out the recorded meetings ? and the other one is , , is there some good use that we can make of the transcribers to do other things ? so , i ca n't remember how much we talked about this in this meeting but there was postdoc g: and there is one use that also we discussed which was when , dave finishes the and maybe it 's already finished the modification to multi - trans which will allow fine grained encoding of overlaps . , then it would be very these people would be very good to shift over to finer grain encoding of overlaps . it 's just a matter of , providing so if right now you have two overlapping segments in the same time bin , with the improvement in the database in the , , in the interface , it 'd be possible to , , just do a click and drag thing , and get the , the specific place of each of those , the time tag associated with the beginning and end of each segment . professor b: right , so we talking about three level three things . one one was , we had s had some discussion in the past about some very high level labelings , professor b: types of overlaps , and that someone could do . second was , somewhat lower level just doing these more precise timings . and the third one is , just a completely wild hair brained idea that i have which is that , if we have time and people are able to do it , to take some subset of the data and do some very fine grained analysis of the speech . , marking in some overlapping potentially overlapping fashion , the value of , ar articulatory features . , just say , ok , it 's voiced from here to here , there 's it 's nasal from here to here , and . , as opposed to doing phonetic , phonemic and the phonetic analysis , and , assuming , articulatory feature values for those things . , that 's extremely time - consuming . postdoc g: also if you 're dealing with consonants that would be easier than vowels , would n't it ? , i would think that , being able to code that there 's a fricative extending from here to here would be a lot easier than classifying precisely which vowel that was . 
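as an aside , the overlapping articulatory - feature marking being proposed here could be represented as independent labeled time spans rather than a single phone string . everything below , the feature names and times , is invented just to show the idea :

```python
# purely illustrative: overlapping articulatory-feature spans (voiced from
# here to here, nasal from here to here) as independent labeled intervals.

from dataclasses import dataclass

@dataclass
class FeatureSpan:
    feature: str   # e.g. "voiced", "nasal"
    value: str     # e.g. "+", "-"
    start: float   # seconds
    end: float

spans = [
    FeatureSpan("voiced", "+", 1.20, 1.48),
    FeatureSpan("nasal", "+", 1.31, 1.45),      # overlaps the voicing span
    FeatureSpan("frication", "+", 1.48, 1.60),
]

def features_at(t, spans):
    """all feature values active at time t; spans may overlap freely."""
    return {s.feature: s.value for s in spans if s.start <= t < s.end}

print(features_at(1.35, spans))  # {'voiced': '+', 'nasal': '+'}
```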
professor b: , but also it 's just the issue that when you look at the u w u when you look at switchboard very close up there are places where whether it 's a consonant or a vowel you still have trouble calling it a particular phone at that point professor b: now i ' m suggesting articulatory features . maybe there 's even a better way to do it but that 's , a traditional way of describing these things , and , actually this might be a g neat thing to talk to professor b: , it 's still some categories but something that allows for overlapping change of these things and then this would give some more ground work for people who were building statistical models that allowed for overlapping changes , different timing changes as opposed to just " click , you 're now in this state , which corresponds to this speech sound " and so on . professor b: , something like that . , actually if we get into that it might be good to , haul john ohala into this and ask his views on it . phd a: like so that you can do far field studies of those gestures or , or is it because you think there 's a different actual production in meetings that people use ? or ? professor b: no , i think it 's for that purpose i ' m just viewing meetings as being a neat way to get people talking naturally . and then you have i and then it 's natural in all senses , professor b: in the sense that you have microphones that are at a distance that , one might have , and you have the close mikes , and you have people talking naturally . and the overlap is just indicative of the fact that people are talking naturally , right ? so so that given that it 's that corpus , if it 's gon na be a very useful corpus , if you say w ok , we ' ve limited the use by some of our , censored choices , we do n't have the video , we do n't and , but there 's a lot of use that we could make of it by expanding the annotation choices . and , most of the things we ' ve talked about have been fairly high level , and being a bottom - up person maybe we 'd , do some of the others . postdoc g: it 's a balance . that would be really to offer those things with that wide range . professor b: , people did n't , people have made a lot of use of timit and , w due to its markings , and then the switchboard transcription thing , has been very useful for a lot of people . phd a: that 's true . i wanted to , make a pitch for trying to collect more meetings . i actually i talked to chuck fillmore and they ' ve what , vehemently said no before but this time he was n't vehement and he said , " , liz , come to the meeting tomorrow and try to convince people " . so i ' m gon na try . go to their meeting tomorrow and see if we can try , to convince them phd a: because they have and they have very interesting meetings from the point of view of a very different type of talk than we have here phd a: , yes and in terms of the fact that they 're describing abstract things and , just dialogue - wise , so i 'll try . and then the other thing is , i if this is useful , but i asked lila if maybe go around and talk to the different departments in this building to see if there 's any groups that , for a free lunch , if we can still offer that , might be willing grad h: but the problem is so much of their is confidential . it would be very hard for them . postdoc g: also it does seem like it takes us way out of the demographic . 
, it seems like we had this idea before of having like linguistics students brought down for free lunches phd a: right , and then we could also we might try advertising again because it 'd be good if we can get a few different non - internal types of meetings and just also more data . phd a: so i actually wrote to him and he answered , " great , that sounds really interesting " . but i never heard back because we did n't actually advertise openly . we a w i told i d asked him privately . , and it is a little bit of a trek for campus folks . grad h: but , it would be if we got someone other than me who knew how to set it up and could do the recording so u i did n't have to do it each time . phd a: and i ' m willing to try to learn . , i ' m i would do my best . , the other thing is that there was a number of things at the transcription side that , transcribers can do , like dialogue act tagging , phd a: disfluency tagging , things that are in the speech that are actually something we 're y working on for language modeling . and mari 's also interested in it , andreas as . so if you wanna process a utterance and the first thing they say is , " , and that " is coded as some interrupt u tag . , and things like that , th phd a: great . so a lot of this there 's a second pass and i do n't really would exist in it . but there 's definitely a second pass worth doing to maybe encode some kinds of , is it a question or not , or , that maybe these transcribers could do . postdoc g: , i wanted to whi while we 're , so , to return just briefly to this question of more meeting data , i have two questions . one of them is , jerry feldman 's group , they , are they i know that they recorded one meeting . are they willing ? professor b: there 's we should go beyond , icsi but , there 's a lot of happening at icsi that we 're not getting now that we could . professor b: no , no . no . so th there was the thing in fillmore 's group but even there he had n't what he 'd said " no " to was for the main meeting . but they have several smaller meetings a week , and , the notion was raised before that could happen . and it just , it just did n't come together phd e: , and the other thing too is when they originally said " no " they did n't know about this post - editing capability thing . professor b: , so there 's possibilities there . jerry 's group , yes . , there 's , the networks group , i do n't do they still meeting regularly or ? grad h: he was my contact , so need to find out who 's running it now . phd a: and this it sounds bizarre but , i 'd really like to look at to get some meetings where there 's a little bit of heated discussion , like ar arguments and or emotion , and things like that . and so i was thinking if there 's any like berkeley political groups . , that 'd be perfect . some group , " yes , we must " phd a: , ok . no , but maybe stu student , groups or , film - makers , or som something a little bit colorful . professor b: . , th there 's a problem there in terms of , the commercial value of st , postdoc g: there is this problem though , that if we give them the chance to excise later we e might end up with like five minutes out of a f of m one hour phd a: but just something with some more variation in prosodic contours and would be neat . so if anyone has ideas , i ' m willing to do the leg work to go try to talk to people but i do n't really know which groups are worth pursuing . 
postdoc g: and i had one other aspect of this which is , , jonathan fiscus expressed primar y a major interest in having meetings which were all english speakers . now he was n't trying to shape us in terms of what we gather but that 's what he wanted me to show him . so i ' m giving him our , our initial meeting because he asked for all english . and we do n't have a lot of all english meetings right now . phd a: i was thinking , knowing the , n national institute of standards , it is all professor b: i remember a study that bbn did where they trained on this was in wall street journal days , they trained on american english and then they tested on , different native speakers from different areas . and , , the worst match was people whose native tongue was mandarin chinese . the second worst was british english . professor b: it was swiss w , so it 's so , if he 's thinking in terms of recognition technology i he would probably want , american english , postdoc g: that the feldman 's meetings tend to be more that way , are n't they ? , i feel like they have grad h: and maybe there are a few of with us where it was , dan was n't there and before jose started coming , and professor b: so , what about people who involved in some artistic endeavor ? , film - making like that . phd a: something where there is actually discussion where there 's no right or wrong answer but it 's a matter of opinion thing . , anyway , if you have ideas phd a: and that has a chance to give us some very interesting fun data . if anyone has ideas , if any groups that are m , grad h: the business school . , the business school might be good . i actually spoke with some students up there and they expressed willingness back when they thought they would be doing more with speech . grad h: but when they lost interest in speech they also stopped answering my email about other , so . grad f: i heard that at cal tech they have a special room someone said that they had a special room to get all your frustrations out that you can go to and like throw things and break things . professor b: , but we do n't want them to throw the far field mikes is the thing . grad h: there was a dorm room at tech that , someone had coated the walls and the ceiling , and , the floor with mattresses . the entire room . professor b: i had as my fourth thing here processing of wave forms . what did we mean by that ? remember @ ? grad h: , liz wanted to talk about methods of improving accuracy by doing pre - processing . phd a: but that , it would be helpful if stay in the loop somehow with , people who are doing any post - processing , whether it 's to separate speakers or to improve the signal - to - noise ratio , or both , that we can try out as we 're running recognition . , so , i is that who else is work i dan ellis and you professor b: he 's interested we 're look starting to look at some echo cancellation things . which professor b: t no , so no , i w wha what you want when you 're saying improving the wave form you want the close talking microphone to be better . professor b: and the question is to w to what extent is it getting hurt by , by any room acoustics or is it just , given that it 's close it 's not a problem ? phd a: it does n't seem like big room acoustics problems to my ear but i ' m not an expert . it seems like a problem with cross - talk . phd a: all i meant is just that as this pipeline of research is going on we 're also experimenting with different asr , techniques . and so it 'd be w good to know about it . 
phd e: so the problem is like , on the microphone of somebody who 's not talking they 're picking up signals from other people and that 's causing problems ? phd a: r right , although if they 're not talking , using the inhouse transcriptions , were o k because the t no one transcribed any words there and we throw it out . but if they 're talking and they 're not talking the whole time , so you get some speech and then a " - " , and some more speech , so that whole thing is one chunk . and the person in the middle who said only a little bit is picking up the speech around it , that 's where it 's a big problem . postdoc g: , this does like seem like it would relate to some of what jose 's been working on as , the encoding of the and and he also , he was postdoc g: i was t i was trying to remember , you have this interface where you i you ha you showed us one time on your laptop that you had different visual displays as speech and nonspeech events . phd d: c . may i only display the different colors for the different situation . but , for me and for my problems , is enough . because , it 's possible , in a simp sample view , to , nnn , to compare with c with the segment , the assessment what happened with the different parameters . and only with a different bands of color for the , few situation , i consider for acoustic event is enough to @ . i see that , you are considering now , a very sophisticated , ehm , @ set of , graphic s , ehm , si symbols to transcribe . no ? because , before , you are talking about the possibility to include in the transcriber program , a set of symbols , of graphic symbol to t to mark the different situations during the transcription postdoc g: , you 're saying so , symbols for differences between laugh , and sigh , and slam the door and ? postdoc g: , i would n't say symbols so much . the the main change that i see in the interface is just that we 'll be able to more finely c , time things . but i also st there was another aspect of your work that i was thinking about when i was talking to you which is that it sounded to me , liz , as though you and , maybe i did n't q understand this , but it sounded to me as though part of the analysis that you 're doing involves taking segments which are of a particular type and putting them together . and th so if you have like a p a s , speech from one speaker , then you cut out the part that 's not that speaker , and you combine segments from that same speaker to and run them through the recognizer . is that right ? phd a: we try to find as close of start and end time of as we can to the speech from an individual speaker , because then we 're more guaranteed that the recognizer will for the forced alignment which is just to give us the time boundaries , because from those time boundaries then the plan is to compute prosodic features . and the more space you have that is n't the thing you 're trying to align the more errors we have . , so , that it would help to have either pre - processing of a signal that creates very good signal - to - noise ratio , phd a: which i how possible this is for the lapel , or to have very to have closer , time , synch times , around the speech that gets transcribed in it , or both . and it 's just a open world right now of exploring that . so wanted to see , on the transcribing end from here things look good . , the ibm one is more it 's an open question right now . 
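a crude sketch of the kind of pre - processing being asked for here : flag frames on a close - talking channel as the wearer 's own speech only when that channel clearly dominates the others in short - term energy . the margin and frame sizes are invented , and real cross - talk handling would need more than raw energy :

```python
import numpy as np

# toy sketch: per-channel speech/non-speech by cross-channel energy
# comparison, as a crude guard against cross-talk. thresholds invented.

def frame_energies(signal, frame_len=400, hop=160):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    return np.array([np.sum(f.astype(float) ** 2) for f in frames])

def own_speech_mask(channels, margin=2.0):
    """channels: list of equal-length 1-d arrays, one per speaker."""
    energies = np.stack([frame_energies(c) for c in channels])  # (spk, frm)
    masks = []
    for s in range(energies.shape[0]):
        others = np.delete(energies, s, axis=0).max(axis=0) + 1e-9
        # true only where speaker s clearly dominates the frame
        masks.append(energies[s] > margin * others)
    return masks
```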
and then the issue of like global processing of some signal and then , before we chop it up is yet another way we can improve things in that . phd e: what about increasing the flexibility of the alignment ? do you remember that thing that michael finka did ? that experiment he did a while back ? phd a: right . you can , the problem is just that the acoustic when the signal - to - noise ratio is too low , you 'll get , a an alignment with the wrong duration pattern or it phd e: , so that 's the problem , is the signal - to - noise ratio . phd a: . it 's not the fact that you have like , what he did is allow you to have , words that were in another segment move over to the at the edges of segmentations . phd a: right , things near the boundaries where if you got your alignment wrong cuz what they had done there is align and then chop . , and this problem is a little bit j more global . it 's that there are problems even in inside the alignments , because of the fact that there 's enough acoustic signal there t for the recognizer to eat , as part of a word . and it tends to do that . s so , but we probably will have to do something like that in addition . anyway . so , bottom line is just i wanted to make be aware of whoever 's working on these signal - processing techniques for , detecting energies , because that 'll really help us . professor b: o k , tea has started out there i suggest we c run through our digits and , ok , we 're done . ###summary: the berkeley meeting recorder group discussed recording equipment and setup issues , recent developments in the transcription effort , other potential types of tagging to be assigned to transcribers , and the post-processing of waveforms. the discussion was largely focused on efforts to facilitate transcriptions , including the improvement of strategies for transcribing overlapping speech , and achieving greater uniformity in the type of equipment used during recordings and the manner in which recording devices are worn by speakers. to achieve greater uniformity in across-speaker recording conditions , the group decided to purchase three additional head-mounted microphones. future work will include recording more varied meeting data from non-icsi discussion groups. it was proposed by speaker fe016 that better communication be established between researchers involved in post-processing of the waveform and asr. use of dissimilar microphones adds an extra , unwanted variable to individual speaker recordings. similarly , differences in the type of recording equipment used and the manner in which microphones are worn by speakers causes problems for the transcription effort. setting up a microphone array and performing video recordings ( in a possible collaboration with nist ) are problematic due to the types of changes in infrastructure they require. ibm's single-channel approach to transcriptions may pose problems for the post-processing of waveforms and forced alignments , as the group foresees difficulties in referencing chopped segments back to the original times/locations from which they were extracted. another post-processing problem involves cross-talk , and , in particular , situations in which a speaker whose contributions to the discussion are relatively sparse but whose microphone picks up signals from the other speakers. modifications are being made to multi-trans to enable tight time markings at the boundaries of overlapping speech segments , and facilitate the transcription of such segments. 
pre-segmentation continues to be very beneficial to the transcription effort. work by speaker mn014 is in progress to compare the energies of the different channels for detecting speech/non-speech portions, facilitating transcription, and potentially providing speaker identification information. efforts are under way to create a cross-correlation setup linking the recorded data with a map of where individual speakers were seated. the transcriber pool has been performing at the expected rate of roughly twelve hours of transcription per hour of recorded speech. ibm has a team of transcribers working on the meeting data, each transcribing a single channel rather than the mixed signal. the group discussed the potential for assigning additional tasks to icsi's transcriber pool, including tagging of more fine-grained acoustic information, as well as discourse and disfluency tagging.
grad b: yes . whew ! i almost forgot about the meeting . i woke up twenty minutes ago , thinking , what did i forget ? grad d: ok . so the news for me is a , my forthcoming travel plans in two weeks from today ? more or less ? i 'll be off to sicily and germany for a couple , three days . grad d: ok , i ' m flying to sicily to drop off simon there with his grandparents . and then i ' m flying to germany t to go to a moku - treffen which is the meeting of all the module - responsible people in smartkom , grad d: and , represent ici and myself i there . and . that 's the mmm actual reason . and then i ' m also going up to eml for a day , and then i ' m going to meet the very big boss , wolfgang walster , in saarbruecken and the system system integration people in kaiserslautern and then i ' m flying back via sicily pick up my son come back here on the fourth of july . grad d: and i ' m all the people at the airport will be happy to work on that day . grad b: . then the , it 's not a big deal . once you get to the united states it 'll be a problem , but grad d: and , that 's that bit of news , and the other bit of news is we had , i was visited by my german project manager who a , did like what we did what we 're doing here , and b , is planning to come here either three weeks in july or three weeks in august , to actually work . grad d: with us . and we sat around and we talked and he came up we came up with a pretty strange idea . and that 's what i ' m gon na lay on you now . and , maybe it might be ultimately the most interesting thing for eva because she has been known to complain about the fact that the we do here is not weird enough . grad d: imagine if you will , that we have a system that does all that understanding that we want it to do based on utterances . grad d: it should be possible to make that system produce questions . so if you have the knowledge of how to interpret " where is x ? " under given conditions , situational , user , discourse and ontological conditions , you should also be able to make that same system ask " where is x ? " grad d: in a sper certain way , based on certain intentions . so in instead of just being able to observe phenomenon , and , the intention we might be able just to give it an intention , and make it produce an utterance . grad b: , like in ai they generally do the take in , and then they also do the generation phase , like nancy 's thing . or , you remember , in the hand thing in one - eighty - two , like not only was it able to recognize but it was also to generate based upon situations . you mean that thing ? ok . grad d: and once you ' ve done that what we can do is have the system ask itself . and answer , understand the answer , ask something else , and enter a dialogue with itself . so the ba basic the same idea as having two chess computers play against each other . grad d: you c if you want , you can have two parallel machines , asking each other . what would that give us ? would a be something completely weird and strange , and b , i if you look the factors , we will never observe people let 's say , in wheelchairs under , in under all conditions , grad d: , when they say " x " , and there is a ride at the goal , and the parking is good , we can never collect enough data . it 's it 's not possible . grad d: but maybe one could do some learning . if you get the system to speak to itself , you may find n break downs and errors and you may be able to learn . and make it more robust , maybe learn new things . 
so there 's no end of potential things one could get out of it , if that works . and he would like to actually work on that with us . grad d: so , i w see the generation bit , making the system generate something , is should n't be too hard . grad b: do n't think we 're probably a year away from getting the system to understand things . grad d: . , if we can get it to understand one thing , like our " where is " run through we can also , maybe , e make it say , or ask " where is x ? " or not . grad e: mmm , i . e i ' m have the impression that getting it to say the right thing in the right circumstances is much more difficult than getting it to understand something given the circumstances and so on , just cuz it 's harder to learn to speak correctly in a foreign language , rather than learning to understand it . right ? just the fact that we 'll get that getting it to understand one construction does n't mean that it will n always know exactly when it 's correct to use that construction . right ? grad d: it 's it 's , i ' ve done generation and language production research for fo four and a half years . and so it 's you 're right , it 's not the same as the understanding . it 's in some ways easier and some ways harder . nuh ? but , it 'd be fun to look at it , or into that question . grad b: the basic idea i would be to give allow the system to have intentions , ? cuz that 's what needs to be added to the system for it . grad d: , look at th eee , even think even what it would be the prior intention . so let 's , let 's say we have this grad d: no . let 's we have to we have some top - down processing , given certain setting . ok , now we change nothing , and just say ask something . right ? what would it ask ? grad b: unless it was in a situation . we 'd have to set up a situation where , it did n't know where something was and it wanted to go there . grad b: which means that we 'd need to set up an intention inside of the system . right ? which is , " i where something is and i need to go there " . grad d: s it 's i i know it 's strange , but look at it look at our bayes - net . if we do n't have let 's assume we do n't have any input from the language . right ? so there 's also nothing we could query the ontology , but we have a certain user setting . if you just ask , what is the likelihood of that person wanting to enter some something , it 'll give you an answer . that 's just how they are . and so , @ whatever that is , it 's the generic default intention . that it would find out . which is , wanting to know where something is , maybe nnn and wanting i what it 's gon na be , but there 's gon na be something that grad e: you 're not gon na are you gon na get a variety of intentions out of that then ? , you 're just talking about like given this user , what 's the th what is it what is that user most likely to want to do ? grad d: you can observe some user and context and ask , what 's the posterior probabilities of all of our decision nodes . grad d: you could even say , " let 's take all the priors , let 's observe nothing " , and query all the posterior probabilities . it - it 's gon na tell us something . grad d: and yes . and come up with posterior probabilities for all the values of the decision nodes . which , if we have an algorithm that filters out whatever the best or the most consistent answer out of that , will give us the intention ex nihilo . 
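the " ex nihilo " query described here , reading the decision nodes ' posteriors off the net with nothing observed , amounts to marginalizing over the priors of the parents . a toy stand - in , with invented numbers , not the actual javabayes model :

```python
# toy stand-in for querying a belief net with no evidence: with nothing
# observed, the posterior of a decision node is just its marginal over
# the priors of its parents. all numbers are invented.

P_user = {"tourist": 0.6, "local": 0.4}           # prior on a user node
P_go_given_user = {"tourist": 0.8, "local": 0.3}  # cpt for "go-there"

def prior_of_go_there():
    # P(go) = sum_u P(go | u) * P(u)
    return sum(P_go_given_user[u] * P_user[u] for u in P_user)

print(prior_of_go_there())  # the "ex nihilo" default intention: 0.6
```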
and that is exactly what would happen if we ask it to produce an utterance , it would be b based on that extension , ex nihilo , which we what it is , but it 's there . so we would n't even have to t to kick start it by giving it a certain intention or observing anything on the decision node . and whatever that maybe that would lead to " what is the castle ? " , grad b: i what i ' m afraid of is if we do n't , set up a situation , we 'll just get a bunch of garbage out , like , everything 's exactly thirty percent . grad d: . so what we actually then need to do is write a little script that changes all the settings , go goes through all the permutations , which is we did a did n't we calculate that once ? grad b: cuz i went and looked at it cuz i was thinking , that could not be right , and it would it was on the order of twenty output nodes and something like twenty grad b: so to test every output node , would at least let 's see , so it would be two to the thirty for every output node ? which is very th very large . grad d: ! that 's n that 's nothing for those neural guys . , they train for millions and millions of epochs . grad b: , i was gon na take a drink of my water . i ' m talking about billions and a number two to the thirty is like a bhaskara said , we had calculated out and bhaskara believes that it 's larger than the number of particles in the universe . and if i grad e: i if that 's right or not . th - that 's big . that 's just that 's it 's a billion , right ? grad b: two to the thirty ? , two to the thirty is a billion , but if we have to do it two to the twenty times , then that 's a very large number . grad b: cuz you have to query the node , for every a , or query the net two to the twenty times . grad b: that 's @ that 's big . actually ! we calculated a different number before . how did we do that ? grad e: , it 's g anyway , that given all of these different factors , it 's e it 's still going to be impossible to run through all of the possible situations or whatever . grad b: if it takes us a second to do , for each one , and let 's say it 's twenty billion , then that 's twenty billion seconds , which is eva , do the math . grad d: . so , it be it 's an idea that one could n run past , what 's that guy 's name ? he - he 's usually here . tsk . j jer - jerj grad d: so this is just an idea that 's floating around and we 'll see what happens . and , what other news do i have ? we fixed some more things from the smartkom system , but that 's not really of general interest , ! questions , i 'll ask eva about the e bayes and she 's working on that . how is the generation xml thing ? grad d: ok . no need to do it today or tomorrow even . do it next week or grad b: i ' m gon na finish it today , hopefully . i wanna do one of those things where i stay here . cuz , if i go home , i ca n't finish it . i ' ve tried about five times so far , where i work for a while and then i ' m like , i ' m hungry . so i go home , and then grad b: either that or to myself , work at home . and then i try to work at home , but i fail miserably . like i ended up at blakes last night . grad b: no . i almost got into a brawl . but i did not finish the , but i ' ve been looking into it . i th @ it 's not like it 's a blank slate . i found everything that i need and stu and , grad b: at the b furthermore , i told jerry that i was gon na finish it before he got back . so . grad b: , we think we 'll see him definitely on tuesday for the next or , no , . the meetings are on thursday . 
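for what it 's worth , the arithmetic in this exchange checks out roughly : twenty output nodes times two to the thirty input settings is about twenty billion queries , and at a second apiece that is centuries , so exhaustive permutation really is out . a quick check :

```python
# sanity check on the numbers thrown around above (rough, but the
# conclusion holds): exhaustively querying the net is hopeless.

queries = 20 * 2 ** 30         # ~20 output nodes x 2^30 input settings
seconds = queries * 1.0        # optimistic: one second per query
print(f"{queries:,} queries")  # 21,474,836,480 -- "twenty billion"
print(f"{seconds / (3600 * 24 * 365):.0f} years")  # ~681 years
```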
grad b: i was thinking about that . i will try to work on the smartkom and i 'll if finish it today , i 'll help you with that tomorrow , if you work on it ? i do n't have a problem with us working on it though ? and it grad b: we just it would n't hurt to write up a paper , cuz then , i was talking with nancy and nancy said , you whether you have a paper to write up until you write it up . and since jerry 's coming back , we can run it by him too . grad e: , i do n't have much experience with , conference papers for compu in the computer science realm , and so when i looked at what you had , which was a complete submission , said what just i did n't really to do with it , like , this is the basic outline of the system or whatever , or here 's an idea , right ? that 's what that paper was , here 's one possible thing you could do , short , eight pages , and what you have in mind for expanding . like i 'd i what i did n't do is go to the web site of the conference and look at what they 're looking for or whatever . grad d: , it 's more it 's both , right ? it 's it 's t cognitive , neural , psycho , linguistic , but all for the sake of doing computer science . so it 's cognitive , psycho , neural , plausibly motivated , architectures of natural language processing . so it seems pretty interdisciplinary , and , w the keynote speaker is tomasello and blah - blah , grad d: so , w the question is what could we actually do and keep a straight face while doing it . grad d: , you can say we have done a little bit and that 's this , and the rest is position paper , we wanna also do that . which is not too good . might be more interesting to do something like let 's assume , we 're right , we have as jerry calls it , a delusion of adequacy , and take a " where is x " sentence , and say , " we will just talk about this , and how we cognitively , neurally , psycho - linguistically , construction grammar - ally , motivated , envision , understanding that " . grad d: so we can actually show how we parse it . that should be able to we should be able to come up with , a parse . it 's on , just put it on . grad a: was he supposed to harass me ? , he just told me that you came looking for me . grad b: so put that those things over your ears like that . see the p how the plastic things ar arch out like that ? there we go . grad a: i ' m . i ' m , these are all the same . ok ! th this is not very on target . grad d: and brought forth the idea that we take a sentence , " where is the powder - tower " , and we p pretend to parse it , we pretend to understand it , and we write about it . grad d: it 's the whatever , architectures , where there is this conference , it 's the seventh already international conference , on neu neurally , cognitively , motivated , architectures of natural language processing . grad a: is it normally like , dialogue systems , or , other nlp - ish things ? grad d: it would be to go write two papers actually . and one from your perspective , and one from our peve per grad a: , th that 's the kinda thing that maybe like , the general con like ntl - ish like , whatever , the previous simulation based pers maybe you 're talking about the same thing . a general paper about the approach here would probably be appropriate . and good to do at some point anyway . 
grad d: , i also think that if we write about what we have done in the past six months , we could craft a little paper that if it gets rejected , which could happen , does n't hurt because it 's something we grad d: having it is a good thing . it 's a exercise , it 's i usually enjoy writing papers . it 's not i do n't re regard it as a painful thing . grad d: and , we should all do more for our publication lists . and . it just never hurts . and keith and - or johno will go , probably . grad a: , it 's in germany . , ok . i s i see . tomasello 's already in germany anyway , so makes sense . ok . grad a: . ok . so , is the what are you just talking about , the details of how to do it , or whether to do it , or what it would be ? grad d: what is our what 's our take home message . what what do we actually because , it i do n't like papers where you just talk about what you plan to do . , it 's obvious that we ca n't do any evaluation , and have no , we ca n't write an acl type paper where we say , " ok , we ' ve done this and now we 're whatever percentage better than everybody else " . it 's far too early for that . but , we can tell them what we think . that 's never hurts to try . maybe even that 's maybe the time to introduce the new formalism that you guys have cooked up . grad e: my gosh . , you were we were talking about something which was much more like ten . grad d: no that 's actually a problem . it 's difficu it 's more difficult to write on four pages than on eight . grad a: it 's and it 's also difficult to even if you had a lot of substance , it 's hard to demonstrate that in four pages , . grad d: i maybe it 's just four thousand lines . i do i do n't they do n't want any they do n't have a tex f style @ guide . grad d: they just want ascii . pure ascii lines , whatever . why , for whatever reason , grad b: i would say that 's closer to six pages actually . four thousand lines of ascii ? grad e: four thousand lines . is n't a is n't it about fifty s fifty five , sixty lines to a page ? grad d: i d do n't quote me on this . this is numbers i have from looking o grad d: let 's let 's wh what should we , discuss this over tea and all of us look at the web ? , i ca n't . i ' m wizarding today . grad b: i got an email . , keith is comfortable with us calling him " keith " . grad d: w and a week of business in germany . i should mention that for you . and otherwise you have n't missed much , except for a really weird idea , but you 'll hear about that soon enough . grad a: the idea that you and i already know about ? that you already told me ? not that grad d: no , no . , that is something for the rest of the gang to g grad d: it 's a presumably one of the watergate codes they anyways , th , do n't make any plans for spring break next year . that 's grad d: that 's the other thing . we 're gon na do an int edu internal workshop in sicily . grad a: i know about that part . i know about the almond trees and . not joking . name a vegetable , ok . , kiwi ? grad a: . ok , but i was trying to find something that he did n't grow on his farm . grad e: , we 're gon na have an example case , right ? i m to like this " where is " case , . grad d: , maybe you have it would be the paper ha would have , in my vision , a flow if we could say , here is th the th here is parsing if you wanna do it c right , here is understanding if you wanna do it right , and without going into technical grad a: but then in the end we 're not doing like those things right yet , right ? 
would that be clear in the paper or not ? grad d: that would be clear , we would i mailed around a little paper that i have grad a: it would be like , this is the idea . , i did n't get that , grad d: see this , if you 're not around , and do n't partake in the discussions , and you do n't get any email , grad d: su so we could say this is what 's state of the art today . nuh ? and say , this is bad . nuh ? and then we can say , what we do is this . grad b: do n't you need to reduce it if it 's a or reduce it , if it 's a cognitive neuro grad a: , you do n't have t the conference may be cognitive neural , does n't mean that every paper has to be both . like , nlp cognitive neural . grad d: , and you can just point to the literature , you can say that construction - based grad a: so i so this paper would n't particularly deal with that side although it could reference the ntl - ish , like , approach . the fact that the methods here are all compatible with or designed to be compatible with whatever , neurological neuro - biol su . , i four pages you could definitely it 's definitely possible to do it . it 's just it 'd just be small . like introducing the formalism might be not really possible in detail , but you can use an example of it . grad e: , l looking at , looking at that paper that you had , like , you did n't really explain in detail what was going on in the xml cases or whatever you just sorta said , here 's the general idea , some gets put in there . , hopefully you can say something like constituents tells you what the construction is made out of , without going into this intense detail . grad a: , . so it be like using the formalism rather than , introducing it per se . so . grad e: give them the one paragraph whirlwind tour of w what this is for , and . grad d: so this will be documenting what we think , and documenting what we have in terms of the bayes - net . and since there 's never a bad idea to document things , no ? grad d: that would be my , we we should sketch out the details maybe tomorrow afternoon - ish , if everyone is around . you probably would n't be part of it . grad d: maybe you want ? think about it . you may ruin your career forever , if you appear . grad d: the , other thing , we actually have we made any progress on what we decided , last week ? i ' m you read the transcript of last week 's meeting in red so sh so you 're up to dated caught up . grad d: we decided t that we 're gon na take a " where is something " question , and pretend we have parsed it , and see what we could possibly hope to observe on the discourse side . grad b: remember i came in and i started asking you about how we were sor going to sort out the , decision nodes ? grad b: , there was like we needed to or , in my opinion we need to design a bayes another sub - bayes - net , it was whether we would have a bayes - net on the output and on the input , or whether the construction was gon na be in the bayes - net , grad d: should i introduce it as sudo - square ? we have to put this in the paper . if we write it . this is this is my only constraint . the th the sudo - square { nonvocalsound } is , " situation " , " user " , " discourse " , right ? " ontology " . grad d: , also he 's talking about suicide , and that 's not a notion i wanna have evoked . grad e: it sounds too rocking for that . anyway . so , what 's going on here ? so what are what grad d: , these are our , whatever , belief - net decision nodes , and they all contribute to these { nonvocalsound } things down here . 
grad d: , in the moment it 's a bayes - net . and it has fifty not - yet - specified interfaces . ok . i have taken care that we actually can build little interfaces , { nonvocalsound } to other modules that will tell us whether the user likes these things and , n the or these things , and he whether he 's in a wheelchair or not , grad e: so , what a what are these letters again , situr - situation , user , discourse and grad d: and w i s i irena gurevich is going to be here , end of july . she 's a new linguist working for eml . and what she would like to do is great for us . she would like to take the ent ontolog grad d: think of back at the eva vector , and johno coming up with the idea that if the person discussed the admission fee , in previously , that might be a good indication that , " how do i get to the castle ? " , actually he wants to enter . or , " how do i get to x ? " discussing the admission fee in the previous utterance , is a good indication . we do n't want a hard code , a set of lexemes , or things , that person 's , filter , or search the discourse history . so what would be is that if we encounter concepts that are castle , tower , bank , hotel , we run it through the ontology , and the ontology tells us it has , admission , opening times , it has admission fees , it has this , it has that , and then we make a thesaurus lexicon , look up , and then search dynamically through the , discourse history for occurrences of these things in a given window of utterances . and that might , give us additional input to belief a versus b . or e versus a . grad a: so it 's not just a particular word 's ok , so the you 're looking for a few keys that are cues to , a few specific cues to some intention . grad e: that you that when someone 's talking about a castle , that it 's the thing that people are likely to wanna go into ? or , is it the fact that if there 's an admission fee , then one of the things we know about admission fees is that you pay them in order to go in ? and then the idea of entering is active in the discourse ? and then blah - blah ? grad d: the idea is even more general . the idea is to say , we encounter a certain entity in a utterance . so le let 's look up everything we the ontology gives us about that entity , what it does , what roles it has , what parts , whatever it has . functions . and , then we look in the discourse , whether any of that , or any surface structure corresponding to these roles , functions aaa has ever occurred . and then , the discourse history can t tell us , " , or " no " . and then it 's up for us to decide what to do with it . t so i grad d: so , we may think that if you say , " where is the theater " , whether or not he has talked about tickets before , then we he 's probably wanna go there to see something . or " where is the opera in par - paris ? , lots of people go to the opera to take pictures of it and to look at it , grad d: and lots of people go to attend a performance . and , the discourse can maybe tell us w what 's more likely if we to look for in previous statements . and so we can hard code " for opera , look for tickets , look for this , look for that , or look for mozart , look for thi " but the smarter way is to go via the ontology and dynamically , then look up u . grad e: ok . but you 're still doing look up so that when the person so that when the person says , " where is it ? 
" then you say , let 's go back and look at other things and then decide , rather than the other possibility which is that all through discourse as they talk about different things like w prior to the " where is it " question they say , " how much does it cost to get in , to see a movie around here " , " where is the closest theater " the that by mentioning admission fees , that just stays active now . that becomes part of like , their current ongoing active conceptual structure . and then , over in your bayes - net or whatever , when the person says " where is it " , you ' ve already got , since they were talking about admission , and that evokes the idea of entering , then when they go and ask " where is it " , then you 're enter node is already active because that 's what the person is thinking about . that 's the cognitive linguistic - y way , grad d: that 's the correct way . so , we have to keep memory of what was the last intention , and how does it fit to this , and what does it tell us , in terms of the what we 're examining . grad d: and furthermore , we can idealize that , people do n't change topics , but they do . but , even th for that , there is a student of ours who 's doing a dialogue act , recognition module . so , maybe , we 're even in a position where we can take your approach , which is much better , as to say how do these pieces grad d: . how how do these pieces fit together ? but , ok , nevertheless . so these are issues but we what we actually decided last week , is to , and this is , again , for your benefit is to , pretend we have observed and parsed an utterance such as " where is the powder - tower " , or " where is the zoo " , and specify , what we think the output , observe , out i input nodes for our bayes - nets for the sub - d , for the discourse bit , should be . so that and i will then come up with the ontology side , bits and pieces , so that we can say , ok we always just look at this utterance . that 's the only utterance we can do , it 's hard coded , like srini , hand parsed , hand crafted , but this is what we hope to be able to observe in general from utterances , and from ontologies , and then we can fiddle with these things to see what it actually produces , in terms of output . so we need to find out what the " where is x " construction will give us in terms of semantics and simspec type things . grad a: just ok . just " where is x " ? or any variants of that . grad d: no ! , look at it this way , i what did we decide . we decided the prototypical " where is x " , where , we do n't really know , does he wanna go there , or just wanna know where it is . grad d: so the difference of " where is the railway station " , versus where " where is greenland " . nuh ? grad e: so , we 're supposed to we 're talking about anything that has the semantics of request for location , right ? actually ? or , anyway , the node in the ultimate , in the bayes - net thing when you 're done , the node that we 're talking about , is one that says " request for location , true " , like that , right ? , and exactly how that gets activated , like whether we want the sentence " how do i get there ? " to activate that node or not , that 's the issue that the linguistic - y side has to deal with , right ? grad d: , but it yea - nnn actually more m more the other way around . we wanted something that represents uncertainty we in terms of going there or just wanting to know where it is , . some generic information . 
and so this is prototypically @ found in the " where is something " question , surface structure , grad d: which can be p , should be maps to something that activates both . the idea is to grad b: i do n't see unde how we would be able to distinguish between the two intentions just from the g utterance , though . , bef or , before we do n't before we cranked it through the bayes - net . grad b: ok , but then so it 's just a for every construction we have a node in the net , right ? and we turn on that node . grad b: and then given that we know that the construction has these two things , we can set up probabilities we can s define all the tables for ev for those grad d: , it should be so we have i let 's assume we call something like a loc - x node and a path - x node . and what we actually get if we just look at the discourse , " where is x " should activate or should . should be both , whereas maybe " where is x located " , we find from the data , is always just asked when the person wants to know where it is , and " how do i get to " is always asked when the person just wants to know how to get there . so we want to come up with what gets , input , and how inter in case of a " where is " question . so what would the outcome of your parser look like ? and , what other discourse information from the discourse history could we hope to get , squeeze out of that utterance ? so define the input into the bayes - net based on what the utterance , " where is x " , gives us . so definitely have an entity node here which is activated via the ontology , grad d: so " where is x " produces something that is s stands for x , whether it 's castle , bank , restroom , toilet , whatever . and then the ontology will tell us grad a: that it has a location like that ? or th the ontology will tell us where actually it is located ? grad d: no . not . where it is located , we have , a user proximity node here somewhere , grad d: e which tells us how far the user how far away the user is in respect to that entity . grad a: so you 're talking about , the construction involves this entity or refers to this entity , and from the construction also that it is a location is or a thing that can be located . right ? ontology says this thing has a location slot . sh - and that 's the thing that is being that is the content of the question that 's being queried by one interpretation of " where is x " . and another one is , path from current user current location to that location . so is the question it 's just that i ' m not what the is the question , for this particular construction how we specify that 's the information it provides ? or or asked for ? b both sides , right ? grad d: , you do n't need to even do that . it 's just what would be @ observed in that case . grad a: observed when you heard the speaker say " where is x " , or when that 's been parsed ? so these little circles you have by the d ? is that ? grad b: i d i do n't like having characterizing the constructions with location and path , or li characterizing them like that . cuz you do n't it seems like in the general case you would n't know how to characterize them . grad b: or , for when . there could be an interpretation that we do n't have a node for in the it just seems like @ has to have a node for the construction and then let the chips fall where they may . versus , saying , this construction either can mean location or path . and , in this cas and since it can mean either of those things , it would light both of those up . 
grad d: it will be the same . so r in here we have " i 'll go there " , right ? grad d: and we have our info - on . so in my c my case , this would make this happy , and this would make the go - there happy . what you 're saying is we have a where - x question , where - x node , that makes both happy . that 's what you 're proposing , which is , in my mind just as fine . so w if we have a construction node , " where is x " , it 's gon na both get the po posterior probability that it 's info - on up , grad d: info - on is true - up , and that go - there is true - up , as . which would be exactly analogous to what i ' m proposing is , this makes something here true , and this makes something also something here true , and this makes this true - up , and this makes this true - up as . grad e: i kinda like it better without that extra level of indirection too . with this points to that , and so on because i , it grad d: , because we get tons of constructions . because , mmm people have many ways of asking for the same thing , grad a: so i agree with that . i have a different kinda question , might be related , which is , ok so implicitly everything in edu , we 're always inferring the speaker intent , right ? like , what they want either , the information that they want , or it 's always information that they want probably , of some kind . right ? or i , or what 's something that they grad a: i do n't so , let 's see . so i if the i if th just there 's more s here that 's not shown that you it 's already like part of the system whatever , but , " where is x " , like , the fact that it is , a speech - act , whatever , it is a question . it 's a question that , queries on some particular thing x , and x is that location . there 's , like , a lot of structure in representing that . so that seems different from just having the node " location - x " and that goes into edu , right ? grad d: ok , the next one would be here , just for mood . the next one would be what we can squeeze out of the i , maybe we wanna observe the , the length of the words used , and , or the prosody grad a: , so in some ways in the other parallel set of mo more linguistic meetings we ' ve been talking about possible semantics of some construction . where it was the simulation that 's , according to it , that corresponds to it , and as the as discourse , whatever , conte infor in discourse information , such as the mood , and , other . so , are we looking for a abbreviation of that , that 's tailored to this problem ? cuz that has , , s it 's in progress still it 's in development still , but it definitely has various feature slots , attributes , bindings between things grad d: u that 's exactly r , why i ' m proposing it 's too early to have to think of them of all of these discourse things that one could possibly observe , so let 's just assume grad d: human beings are not allowed to ask anything but " where is x " . this is the only utterance in the world . what could we observe from that ? grad a: not the choices of " where is x " or " how do i get to x " . just " where is x " . grad d: just just " where is x " . and , but , do it in such a way that we know that people can also say , " is the town hall in front of the bank " , so that we need something like a w wh focus . nuh ? should be should be there , that , this the whatever we get from the grad d: where possible . right ? if i if it 's not triggered by our thing , then it 's irrelevant , and it does n't hurt to leave it out for the moment . 
, but grad a: , it seems like , " where is x " , the fact that it might mean , " tell me how to get to x " , like do y so , would you wanna say that those two are both , like those are the two interpretations , right ? the ones that are location or path . so , you could say that the s construction is a question asking about this location , and then you can additionally infer , if they 're asking about the location , it 's because they wanna go to that place , in which case , the you 're jumping a step and saying , " , i know where it is but i also know how to get they wanna seem they seem to wanna get there so i ' m gon na tell them " . so there 's like structure grad e: right , th this it 's not that this is like semantically ambiguous between these two . grad e: it 's really about this but why would you care about this ? , it 's because you also want to know this , like that right ? grad a: so it 's like you infer the speaker intent , and then infer a plan , a larger plan from that , for which you have the additional information , you 're just being extra helpful . grad d: think , this is just a mental exercise . if you think about , focus on this question , how would you design that ? is it do you feel confident about saying this is part of the language already to detect those plans , and why would anyone care about location , if not , and . or do you actually , this is perfectly legitimate , and i would not have any problems with erasing this and say , that 's all we can activate , based on the utterance out of context . grad d: and then the miracle that we get out the intention , go - there , happens , based on what we know about that entity , about the user , about his various beliefs , goals , desires , blah - blah . grad d: fine . but this is the thing , i propose that we think about , so that we actually end up with , nodes for the discourse and ontology so that we can put them into our bayes - net , never change them , so we all there is " where is x " , and , eva can play around with the observed things , and we can run our better javabayes , and have it produce some output . and for the first time in th in the world , we look at our output , and see whether it 's any good . grad d: , for me this is just a ba matter of curiosity , i wanna would like to look at , what this ad - hoc process of designing a belief - net would actually produce . grad d: if if we ask it where is something . and , maybe it also h enables you to think about certain things more specifically , come up with interesting questions , to which you can find interesting answers . and , additionally it might fit in really nicely with the paper . because if we want an example for the paper , i suggest there it is . so th this might be a opening paragraph for the paper as saying , " people look at kinds of at ambiguities " , and , in the literature there 's " bank " and whatever kinds of garden path phenomenon . and we can say , that 's all nonsense . a , these things are never really ambiguous in discourse , b , do n't ever occur really in discourse , but normal statements that seem completely unambiguous , such as " where is the blah - blah " , actually are terribly complex , and completely ambiguous . 
and so , what every everybody else has been doing so far in , has been completely nonsensical , and can all go into the wastepaper bin , and the only grad d: overture , but , just not really ok , i ' m eja exaggerating , but that might be , saying " hey " , some is actually complex , if you look at it in the vacuum and ceases to be complex in reality . and some that 's as that 's straightforward in the vacuum , is actually terribly complex in reality . would be , also , bottom - up linguistics , type message . grad d: versus the old top - down school . i ' m running out of time . grad d: at four ten . ok , this is the other bit of news . the subjects today know fey , so she ca n't be here , and do the wizarding . so i ' m gon na do the wizarding and thilo 's gon na do the instructing . also we 're getting a person who just got fired , from her job . a person from oakland who is interested in maybe continuing the wizard bit once fey leaves in august . and , she 's gon na look at it today . which is good news in the sense that if we want to continue , after the thir after july , we can . we could . and that 's also maybe interesting for keith and whoever , if you wanna get some more into the data collection . remember this , we can completely change the set - up any time we want . look at the results we ' ve gotten so far for the first , whatever , fifty some subjects ? grad d: no , we 're approaching twenty now . but , until fey is leaving , we surely will hit the some of the higher numbers . so that 's . can a do more funky . grad e: , i 'll have to look more into that data . is that around ? like , cuz that 's getting posted right away when you get it ? or ? i it has to be transcribed , ? grad d: we have , found someone here who 's hand st hand transcribing the first twelve . first dozen subjects just so we can build a language model for the recognizer . but , so those should be available soon . the first twelve . ch st e grad e: that i looked at the first one and got enough data to keep me going for , probably most of july . so . but , . , a probably not the right way to do it actually . grad d: but you can listen to a y you can listen to all of them from your solaris box . if you want . it 's always fun .
an idea for future work was suggested during the visit of the german project manager: the possibility of using the same system for language generation. having a system able to ask questions could contribute significantly to training the belief-net. setting up certain inputs in the bayes-net would imply certain intentions , which would trigger dialogues. there is potential to make a conference paper out of presenting the current work and the project aspirations within a parsing paradigm. the focus should be the bayes-net , to which all other modules interface. situation , user , discourse and ontology feed into the net to infer user intentions. someone asking where the castle is after having asked about the admission fee indicates that (given that the castle is open to tourists) they want to go there , as opposed to knowing its whereabouts. it was suggested that they start analysing what the discourse and ontology would give as inputs to the bayes-net by working on simple utterances like "where is x?". with this addition , all input layers of the net would be functioning. although this function would be limited , it would allow for the bayes-net to be tested in its entirety and , thereafter , extended.

the possibility of incorporating language generation into the system will have to be discussed further. similarly , as no one could recall some of the points of the conference call , the group will have to meet again and define the exact structure and content of the paper they are going to submit. the bayes-net is going to be the focus of the presentation. in order to complete a functioning prototype of the belief-net , it was decided to start expanding the ontology and discourse nodes by working with a simple construction , like "where is x?". a robust analysis of such a basic utterance will indicate what the limits of the information derived from the construction are , as well as ways to design the whole module and fit other constructions in.

the idea to create a language generation module for the system , along with the language understanding , was met with interest , although it was made clear that generation is not just the inverse of understanding. understanding what a construction entails does not mean the system can use the construction in all appropriate circumstances. a dialogue producing system would be useful for training the system further , even though the number of input permutations could render the process computationally unwieldy , as the back-of-the-envelope check after this summary suggests. regarding the conference paper , it was noted that at this stage they have not completed any big parts of the system and there is no evaluation. similarly , the length of the paper would not allow for presentation of the formalism in detail. the focus would have to be on cognitive motivations of the research , and not on system design , anyway. such motivations also apply to the belief-net: there are various direct or indirect ways to link features of the ontology or discourse with specific intentions. the originating observation behind the whole project is that utterances like "where is x?" are seemingly unambiguous , but , in context , they can acquire much more complex interpretations.

the smartkom prototype was in need of de-bugging , which is now on its way. similarly , the work on xml is going to be finished within a day. on the other hand , the data recording has started: almost twenty subjects have already taken part and the transcription of the recordings is running in parallel.
meanwhile a new person , who is also a possible replacement for the wizard's task in the data collection , has been hired.
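the worry about input permutations noted above can be made concrete with a quick python check , assuming the rough figures from the discussion (twenty to thirty binary input nodes) and one belief-net query per second ; all figures are approximations , not measurements :

# Rough figures only: 20 or 30 binary input nodes, one query per second.
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365

print(2 ** 20 / SECONDS_PER_DAY)     # ~12 days to sweep 2^20 input settings
print(2 ** 30 / SECONDS_PER_YEAR)    # ~34 years to sweep 2^30 input settings

so an exhaustive self-dialogue over every permutation is out of reach , and any learning from generated dialogues would have to sample the input space instead.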
grad g: no . , i do n't think they 're designed to be over your ears . phd b: , i know . it just it really hurts . it gives you a headache , like if you on your temple professor d: so are we recording now ? is this ! we 're we 're live . ok . so , what were we gon na talk about again ? so we said data collection , which we 're doing . professor a: ok . do we do th do you go around the room and do names or anything ? grad g: u usually we ' ve done that and also we ' ve s done digits as , but i forgot to print any out . so . besides with this big a group , phd e: y your thing { nonvocalsound } may be pointing in a funny direction . it 's it helps if it points upwards . professor d: , the element , n should be as close to you your mouth as possible . professor d: alright . so what we had was that we were gon na talk about data collection , and , , you put up there data format , and other tasks during data collection , professor a: so , the goal was what can we do how can you do the data collection differently to get professor a: what can you add to it to get , some information that would be helpful for the user - interface design ? like professor a: especially for querying . so , getting people to do queries afterwards , getting people to do summaries afterwards . postdoc h: , one thing that came up in the morning was the , i , if he i , if he has s i do n't remember , mister lan - doctor landry ? postdoc h: la - landay ? so he has , these , , tsk note - taking things , postdoc h: then that would be a summary which you would n't have to solicit . y if we were able to do that . professor a: , if you actually take notes as a summary as opposed to n take notes in the sense of taking advantage of the time - stamps . so action item or , reminder to send this to so - and - so , blah - blah . professor a: so that would n't be a summary . that would just be that would b relate to the query side . grad g: but if we had the crosspads , we could ask people , if something comes up write it down and mark it somehow , phd e: right . , we because you 'd have several people with these pads , you could collect different things . phd f: , the down - side to that is that he indicated that the , quality of the handwriting recognition was quite poor . grad g: but that 's alright . i do n't think there 'd be so many that you could n't have someone clean it up pretty easily . professor a: . we also could come up with some code for things that people want to do so that for frequent things . and the other things , people can write whatever they want . , it 's to some extent , for his benefit . so , if that , if we just keep it simple then maybe it 's still useful . professor d: realized we skipped the part that we were saying we were gon na do at the front where we each said who we were . professor d: no , not a no , my mind went elsewhere . so , , i ' m morgan , where am i ? i ' m on channel three . postdoc h: should we have used pseudo - names ? should we do it a second time with pseudo no . grad g: , and eventually once this room gets a little more organized , the jimlets will be mounted under the table , and these guys will be permanently mounted somehow . , probably with double - sided tape , but so . you so we wo n't have to go through that . postdoc h: i have a question on protocol in these meetings , which is when you say " jimlet " and the person listening wo n't that is , sh shou how how do we get is that important information ? 
, the jimlet , the box that contains the professor d: , suppose we broaden out and go to a range of meetings besides just these internal ones . there 's gon na be lots of things that any group of people who know each other have in column common that we will not know . professor d: , we were originally gon na do this with vlsi design , and the reason we did n't go straight to that was because immediately ninety percent of what we heard would be jargon to us . so . professor a: , u so , actually there 's three issues . there 's the crosspad issue . should we do it and , if so , what 'll we have them do ? , do we have s people write summaries ? everybody or one person ? and then , do we ask people for how they would query things ? is that phd f: there 's there 're sub - problems in that , in that where or when do you actually ask them about that ? , that was one thing i was thinking about was is that dan said earlier that , maybe two weeks later , which is when you would want to query these things , you might ask them then . but there 's a problem with that in that if you 're not if you do n't have an interactive system , it 's gon na be hard to go beyond the first level of question . and furth i d explore the data further . so . professor d: which is , we certainly do want to branch out beyond , recording meetings about meeting recorder . and , once we get out beyond our little group , the people 's motivation factor , reduces enormously . and if we start giving them a bunch of other things to do , how , we did n another meeting here for another group and , they were fine with it . but if we 'd said , " ok , now all eight of you have to come up with , the summar " grad g: i did n't follow up either . so i did n't track them down and say " do th do it now " . but , no one spontaneously provided anything . professor d: i ' m worried that if you did even if you did push them into it , it might be semi - random , as opposed to what you 'd really want to know if you were gon na use this thing . grad g: how else to generate the queries other than getting an expert to actually listen to the meeting and say " that 's important , that might be a query " . postdoc h: tsk . , there is this other thing which y which you were alluding to earlier , which is , there are certain key words like , " action item " and things like that , which could be used in , t to some degree finding the structure . postdoc h: and and i also , was thinking , with reference to the n , note - taking , the advantage there is that you get structure without the person having to do something artificial later . and the fir third thing i wanted to say is the summaries afterwards , they should be recorded instead of written because that , it would take so long for people to write that you would n't get as good a summary . professor a: how about this idea ? that normally at most meetings somebody is delegated to be a note - taker . grad g: , that gives you a summary but it does n't really how do you generate queries from that ? phd e: . but , maybe a summary is one of the things we 'd want from the output of the system . right ? , they 're something . it 's a output you 'd like . grad g: , james and i were talking about this during one of the breaks . and the problem with that is , i ' m definitely going to do something with information retrieval even if it 's not full - bore what i ' m gon na do for my thesis . i ' m gon na do something . i ' m not gon na do anything with summarization . 
and so if someone wants to do that , that 's fine , but it 's not gon na be me . professor d: , that we , the f the core thing is that once we get some of these issues nailed down , we need to do a bunch of recordings and send them off to ibm and get a bunch of transcriptions even if they 're slightly flawed or need some other and then we 'll have some data there . and then , i we can start l looking and thinking , what do we want to know about these things and at the very least . phd b: i actually want to say something about the note pad . so , if you could sense just when people are writing , and you tell them not to doodle , or try not to be using that for other purposes , and each person has a note pad . they just get it when they come in the room . then you c you can just have a fff plot of wh , who 's writing when . that 's all you phd b: and , you can also have notes of the meeting . but i bet that 's that will allow you to go into the hot places where people are writing things down . phd b: , you can tell when you 're in a meeting when everybody stops to write something down that something was just said . it may not be kept in the later summary , but at that point in time is was something that was important . and that would n't take any extra professor d: that 's a good idea but that does n't maybe i ' m missing something , but that does n't get to the question of how we come up with queries , right ? phd b: , then you can go to the points where the you could actually go to those points in time and find out what they were talking about . and you r professor a: it 's an interesting thing . i do n't think it gets at the queries per - se , but it does give us an information fusion thing that , you wanna i say " what were the hot - points of the meeting ? " professor d: that that 's what , is that it gets at something interesting but if we were asking the question , which we were , of , " how do we figure out what 's the nature of the queries that people are gon na want to ask of such a system ? " , knowing what 's important does n't tell you what people are going to be asking . professor a: you could say they 're gon na ask about , when did so - and - so s talk about blah . and at least that gives you the word that they might run a query on . phd b: we do n't even , if you want to find out what any user will use , that might be true for one domain and one user , but a different domain and a different user professor d: , but we 're just looking for a place to start with that because , th what james is gon na be doing is looking at the user - interface and he 's looking at the query in i we we have five hours of pilot data of the other but we have zero hours of queries . so he 's just going " where do i start ? " professor a: w , th you could do the summaries actually may help get us there , for a couple reasons . one , if you have a summary if you have a bunch of summaries , you can do a word frequency count and see what words come up in different types of meetings . so " action item " is gon na come up whether it 's a vlsi meeting , or speech meeting , or whatever . so words that come up in different types of meetings may be something that you would want to query about . professor a: , the second thing you could possibly do with it is just run a little pilot experiment with somebody saying " here 's a summary of a meeting , what questions might you want to ask about it to go back ? 
" grad g: , that 's difficult because then they 're not gon na ask the questions that are in the summary . but , it would give professor a: that 's one possi one possible scenario , though , is you have the summary , and you want to ask questions to get more detail . grad g: th , it has to be a participant . , it does n't have to be . ok . so that is another use of meeting recorder that we have n't really talked about , which is for someone else , as opposed to as a remembrance agent , which is what had been my primary thought in the information retrieval part of it would be . but , i if you had a meeting participant , they could use the summary to refresh themselves about the meeting and then make up queries . but it 's not i how to do it if until you have a system . phd e: but th there is this , there is this class of queries , which are the things that you did n't realize were important at the time but some in retrospect you think " , hang on , did n't we talk about that ? " and it 's something that did n't appear in the summary but you and that 's what this , complete data capture is nicest for . phd e: cuz it 's the things that you would n't have bothered to make an effort to record but they get recorded . so , and th there 's no way of generating those , u until we just until they actually occur . phd e: , it 's like right , right . exactly . but , it 's difficult to say " and if i was gon na ask four questions about this , what would they be ? " those are n't the things that come up . postdoc h: i also think that w if you can use the summaries as an indication of the important points of the meeting , then you might get something like y so if th if the obscure item you want to know more about was some form of data collection , maybe the summary would say , " we discussed types of na data collection " . and , and and maybe you could get to it by that . if you if you had the larger structure of the discourse , then if you can categorize what it is that you 're looking for with reference to those l those larger headings , then you can find it even if you do n't have a direct route to that . grad g: mmm . although it seems like that 's , a high burden on the note - taker . grad g: that 's a pretty fine grain that the note - taker will have to take . professor a: i th no . you got to have somebody who knows the pro knows the topic or , whose job it is delegated to be the note - taker . phd b: no , but someone who can come sit in on the meetings and then takes the notes with them that the real note - taker phd b: and that way that one student has , a rough idea of what was going on , and they can use it for their research . , this is n't really necessarily what you would do in a real system , because that 's a lot of trouble and maybe it 's not the best way to do it . but if he has some students that want to study that then they should get to know the people and attend those meetings , and get the notes from the note - taker . grad g: right . , that 's a little bit of a problem . their note - taking application they ' ve been doing for the last couple of years , and i do n't think anyone is still working on it . they 're done . , so i ' m not that they have anyone currently working on notes . so what we 'd have to interest someone in is the combination of note and speech . grad g: and so the question is " is there such a person ? " and right now , the answer is " no " . 
professor d: i ' ve been thinking about it a little bit here about the , th this , e that the now i ' m thinking that the summary a summary , is actually a reasonable , bootstrap into this into what we 'd like to get at . it 's it 's not ideal , but we , we have to get started someplace . so i was just thinking about , suppose we wanted to get w we have this collection of meeting . we have five hours of . , we get that transcribed . so now we have five hours of meetings and , you ask me , " morgan , what d , what questions do you want to ask ? " , i would n't have any idea what questions i want to ask . i 'd have to get started someplace . so if i looked at summary of it , i 'd go " , i was in that meeting , i remember that , what was the part that " and and th that might then help me to think of things even things that are n't listed in the summary , but just as a refresh of what the general thing was going on in the meeting . professor a: it serves two purpo purposes . one , as a refresh to help bootstrap queries , but also , maybe we do want to generate summaries . and then it 's , it 's a key . professor a: this one being different , but in most meetings that i attend , there 's somebody t explicitly taking notes , frequently on a laptop , you can just make it be on a laptop , so then yo you 're dealing with ascii and not somebody you do n't have to go through handwriting recognition . , and then they post - edit it into , a summary and they email it out for minutes . , that happens in most meetings . postdoc h: i that , there 's we 're using " summary " in two different ways . so what you just described i would describe as " minutes " . postdoc h: and what i originally thought was , if you asked someone " what was the meeting about ? " postdoc h: and then they would say " , we talked about this and then we talked about that , and so - and - so talked about " and then you 'd have , like i e my thought was to have multiple people summarize it , on recording rather than writing because writing takes time and you get irrelevant other things that u take time , that whereas if you just say it immediately after the meeting , a two - minute summary of what the meeting was about , you would get , with mult see , i also worry about having a single note - taker because that 's just one person 's perception . and , , it 's releva it 's relative to what you 're focus was on that meeting , and people have different major topics that they 're interested in . postdoc h: so , my proposal would be that it may be worth considering both of those types , the note - taking and a spontaneous oral summary afterwards , no longer than two minutes , professor d: you can correct me on this , but , my impression was that , , true that the meetings here , nobody sits with a w , with a laptop phd e: i ' ve d when we when we have other meetings . when i have meetings on the european projects , we have someone taking notes . professor d: right ? where you ' ve got fifteen peo , most th this is one of the larger meetings . most of the meetings we have are four or five people professor d: and you 're not you do n't have somebody sitting and taking minutes for it . you just get together and talk about where you are . professor a: so , it depends on whether it 's a business meeting or a technical discussion . phd b: i how , but , the outline is up here and that 's what people are seeing . and if you have a or you shou could tell people not to use the boards . but there 's this missing information otherwise . 
phd e: just to take pictures of who 's there , where the microphones are , and then we could also put in what 's on the board . , like three or four snaps for every phd b: people who were never at the meeting will have a very hard time understanding it otherwise . phd e: , no . , think , that right now we do n't make a record of where people are sitting on the tables . and that the at some point that might be awfully useful . phd e: not not as part of the not as a part of the data that you have to recover . phd b: someone later might be able to take these and say " ok , they , at least these are the people who were there professor d: u , liz , you sa you sat in on the , subcommittee meeting or whatever professor d: , on you on the subcommittee meeting for at the , that workshop we were at that , mark liberman was having . so i was n't there . they they h must have had some discussion about video and the visual aspect , and all that . phd b: big , big interest . huge . , it personally , i do n't i would never want to deal with it . but i ' m just saying first of all there 's a whole bunch of fusion issues that darpa 's interested in . , fusing gesture and face recognition , even lip movement and things like that , for this task . and there 's also a personal interest on the part of mark liberman in this in storing these images in any data we collect so that later we can do other things with it . professor d: , you , that the key thing there is that this is a description of database collection effort that they 're talking about doing . and if the database exists and includes some visual information that does n't mean that an individual researcher is going to make any use of it . grad g: but that it 's gon na be a lot of effort on our part to create it , and store it , and get all the standards , and to do anything with it . professor d: so we 're gon na do what we 're gon na do , whatever 's reasonable for us . phd b: like i know with atis , we just had a tape recorder running all the time . and later on it turned out it was really good that you had a tape recorder of what was happening , even though you w you just got the speech from the machine . so if you can find some really , low , perplexity , professor d: , minimally , what dan is referring to at least having some representation of the p the spatial position of the people , cuz we are interested in some spatial processing . and so , grad g: , once the room is a little more fixed that 's a little easier cuz you 'll , the wireless . but phd b: also cmu has been doing this and they were the most vocal at this meeting , alex waibel 's group . and they have said , i talked to the student who had done this , that with two fairly inexpensive cameras they just recorded all the time and were able to get all the information from or maybe it was three from all the parts of the room . so we would be we might lose the chance to use this data for somebody later who wants to do some processing on it if we do n't collect it . grad g: i do n't disagree . that if you have that , then people who are interested in vision can use this database . the problem with it is you 'll have more people who do n't want to be filmed than who do n't want to be recorded . grad g: so that there 's going to be another group of people who are gon na say " i wo n't participate " . phd b: or you could put a paper bag over everybody 's head and not look at each other and not look at boards , and just all be sitting talking . 
that would be an interes bu postdoc h: , there 's that 'd be the parallel , . but y she 's we 're just proposing a minimal preservation of things on boards , postdoc h: sp spatial organization and you could anonymize the faces for that matter . , this is phd b: they have a pretty crude set - up . and they had they just turn on these cameras . they were they were not moving or anything . phd b: and they did n't actually align it or anything . they just they have it , though . postdoc h: , it 's worth considering . maybe we do n't want to spend that much more time discussing it , professor d: we might try that some and we certainly already have some recordings that do n't have that , which , we 'll get other value out of , . postdoc h: th , if it 's easy to collect it th then it 's a wise thing to do because once it 's gone . and phd b: i ' m just the community if ldc collects this data u , and l - if mark liberman is a strong proponent of how they collect it and what they collect , there will probably be some video data in there . phd b: and so that could argue for us not doing it or it could argue for us doing it . the only place where it overlaps is when some of the summarization issues are actually could be , easier made easier if you had the video . professor d: at the moment we should be determining this on the basis of our own , interests and needs rather than hypothetical ones from a community thing . as you say , if they decide it 's really critical then they will collect a lot more data than we can afford to , and will include all that . professor d: i ' m not worried about the cost of setting it up . i ' m worried about the cost of people looking at it . in other words , it 's it 'd be silly to collect it all and not look at it . and so i that we do have to do some picking and choosing of the that we 're doing . but i am int i do think that we m minimally want something we might want to look at some , subsets of that . like for a meeting like this , at least , take a polaroid of the of the boards , phd b: of the board . or at least make that the note - taker takes a sh , a snapshot of the board . phd b: that 'll make it a lot easier for meetings that are structured . , otherwise later on if nobody wrote this on the board down we 'd have a harder time summarizing it or agreeing on a summary . postdoc h: we and it especially since this is common knowledge . , this is shared knowledge among all the participants , and it 's a shame to keep it off the recording . grad g: er , if we were n't recording this , this would get lost . right ? grad g: that we 're not saving it anyway . right ? in in our real - life setting . professor a: what do you mean we 're not saving it anyway ? i ' ve written all of this down and it 's getting emailed to you . phd b: right . that would be the other alternative , to make that anything that was on the board , is in the record . professor a: , that 's why i ' m saying that the note - taking would be in many for many meetings there will be some note - taking , in which case , that 's a useful thing to have , we do n't need to require it . just like the , it would be great if we try to get a picture with every meeting . , so we wo n't worry about requiring these things , but the more things that we can get it for , the more useful it will be for various applications . so . 
professor d: so , departing for the moment from the data collection question but actually talking about , this group and what we actually want to do , so i that 's th the way what you were figuring on doing was , putting together some notes and sending them to everybody from today ? professor d: so so the question that we started with was whether there was anything else we should do during th during the collection . professor d: and i the crosspads was certainly one idea , and we 'll get them from him and we 'll just do that . and then the next thing we talked about was the summaries and are we gon na do anything about that . professor a: , before we leave the crosspads and call it done . so , if i ' m collecting data then there is this question of do i use crosspads ? so , that if we really have me collect data and i ca n't use crosspads , it 's probably less useful for you guys to go to the trouble of using it , unless you think that the crosspads are gon na n i ' m not what they 're gon na do . but but having a small percentage of the data with it , i ' m not whether that 's useful or not . maybe maybe it 's no big deal . professor d: i the point was to try again , to try to collect more information that could be useful later for the ui . so it 's landay supplying it so that landay 's can be easier to do . so it right now he 's g operating from zero , professor d: and so even if we did n't get it done from uw , it seems like that would could still you shou , at least try it . phd b: it 'd be useful to have a small amount of it just as a proof of concept . it will , what you can do with things . grad g: and and they seem to not be able to give enough of them away , so we could probably get more as . professor a: that 's true . so if it seems to be really useful to you guys , we could probably get a donation to me . grad g: , i ' m not . it will again depend on landay , and if he has a student who 's interested , and how much infrastructure we 'll need . , if it 's easy , we can just do it . , but if it requires a lot of our time , we probably wo n't do it . professor d: i a lot of the we 're doing now really is pilot in one sense or another . grad g: , though , the importance marking is a good idea , though . that if people have something in front of them grad g: do it on pilots or laptops . ok , if something 's important everyone clap . professor a: ok . so crosspads , we 're just gon na try it and see what happens . professor a: the note - taking so , that this is gon na be useful . so if we record data i will definitely ask for it . so , i j we should just say this is not we do n't want to put any extra burden on people , but if they happen to generate minutes , could they send it to us ? grad g: . what i was gon na say is that i do n't want to ask people to do something they would n't normally do in a meeting . it 's ver want to keep away from the artificiality . but it definitely if they exist . and then jane 's idea of summarization afterward is not a bad one . , picking out to let you pick out keywords , and , construct queries . grad g: people in the meeting . , just at the end of the meeting , before you go , phd e: people with radio mikes can go into separate rooms and continue recording without hearing each other . that 's the thing . phd e: if we do if we collect four different summaries , we 're gon na get all this weird data about how people perceive things differently . it 's like this is not what we meant to research . 
professor a: but but again , like the crosspads , i do n't think i would base a lot of on it , professor a: because i know when i see the clock coming near the end of the meeting , i ' m like inching towards the door . professor a: you 're probably not gon na get a lot of people wanting to do this . grad g: , i when you first said do it , spoken , what i was thinking is , then people have to come up and you have to hook them up to the recorder . so , if they 're already here that 's good , but if they 're not already here for i 'd rather do email . i ' m much faster typing than anything else . postdoc h: , i 'd just try , however the least intrusive and quickest way is , and th and closest to the meeting time too , cuz people will start to forget it as soon as they l leave . professor a: . that doing it orally at the end of the meeting is the best time . professor a: do n't because they 're a captive audience . once they leave , forget it . but but i professor a: but , i do n't think that they 'll necessarily you 'll get many people willing to stay . but , if you get even one professor d: that y that you ca n't certainly ca n't require it or people are n't gon na want to do this . but but if there 's some cases where they will , then it would be helpful . postdoc h: and i ' m also wondering , could n't that be included in the data sample so that you could increase the num , the words that are , recognized by a particular individual ? if you could include the person 's meeting and also the person 's summary , maybe that would be , postdoc h: an ad addition to their database . under the same acoustic circumstance , cuz if they just walk next door with their set - up , nothing 's changed , just grad g: it was the projector for a moment . it was like , " what 's going on ? " phd f: the question i had about queries was , so what we 're planning to do is have people look at the summaries and then generate queries ? are are we gon na try and o phd f: so , the question i had is have we given any thought to how we would generate queries automatically given a summary ? , that 's a whole research topic un unto itself , phd b: should n't landay and his group be in charge of figuring out how to do this ? , this is an issue that goes a little bit beyond where we are right now . phd e: someone wants to know when you 're getting picked up . is someone picking you up ? professor a: let 's see , you and i need dis , no , we did the liz talk . professor a: , after . we need to finish this discussion , and you and i need a little time for wrap - up and quad chart . professor d: , we should be able to wind up in another half - hour , you think ? professor d: , we still have n't talked about the action items from here and so on . grad g: , in answer to " is it landay 's problem ? " , he does n't have a student who 's interested right now in doing anything . so he has very little manpower . , there 's very little allocated for him and also he 's pretty focused on user interface . so i do n't think he wants to do information retrieval , query generation , that . professor d: , there 's gon na be these student projects that can do some things but it ca n't be , very deep . u i actually think that , again , just as a bootstrap , if we do have something like summaries , then having the people who are involved in the meetings themselves , who are cooperative and willing to do yet more , come up with queries , could at least give landay an idea of the things that people might want to know . 
, ye if he does n't know anything about the area , and the people are talking about and , phd b: but the people will just look at the summaries or the minutes and re and back - generate the queries . that 's what i ' m worried about . so you might as just give him the summaries . phd b: so what you want to h to do is , people who were there , who later see , minutes and s put in summary form , which is not gon na be at the same time as the meeting . there 's no way that can happen . are we gon na later go over it and , like , make up some to which these notes would be an answer , or a deeper postdoc h: i ' m also wondering if we could ask the people a question which would be " what was the most interesting thing you got out of this meeting ? " becau - in terms of like informativeness , postdoc h: it might be , that the summary would not in even include what the person thought was the most interesting fact . professor a: but actually i would say that 's a better thing to ask than have them summarize the meeting . postdoc h: because you get , like , the general structure of important points and what the meeting was about . postdoc h: so you get the general structure , the important points of what the meeting was about with the summary . but with the " what 's the most interesting thing you learned ? " , so the fact that , i know that transcriber uses snack is something that was interesting postdoc h: and that dan worked on that . so that was really so , you could ge pick up some of the micro items that would n't even occur as major headings but could be very informative . postdoc h: it would n't be too , cost - intensive either . , it 's like something someone can do pretty easily on the spur of the moment . phd e: cuz you 'll get cuz you 'll get very different answers from everybody , right ? grad g: , maybe one thing we could do is for the meetings we ' ve already done , i we did n't take minutes and we do n't have summaries . but , people could , like , listen to them a little bit and generate some queries . jane does n't need to . i ' m you have that meeting memorized by now . professor a: but actually it would be an easy thing to just go around the room and say what was the most interesting thing you learned , for those pe people willing to stay . postdoc h: and that it would pick up the micro - structure , the some of the little things that would be hidden . professor c: , but when you go around the room you might just get the effect that somebody says something professor c: and then you go around the room and they say " , me too , i agree . " phd e: on the other hand people might try and come up with different ones , right ? they might say " , i was gon na say that one but now i have to think of something else " . grad g: you have the other thing , that they know why we 're doing it . we 'll , we 'll be telling them that the reason we 're trying to do this is to d generate queries in the future , so try to pick things that other people did n't say . professor d: it 's gon na take some thought . it seemed the , interest that i had in this thing initially was , that i the form that you 're doing something else later , and you want to pick up something from this meeting related to the something else . so it 's really the imp the list of what 's important 's in the something else rather than the professor d: and it might be something minor of minor importance to the meeting . 
, if it was really major , if it 's the thing that really stuck in your head , then you might not need to go back and check on it even . so it 's that you 're trying to find you 're you ' ve now you were n't interested say i said " , i was n't that much interested in dialogue , i ' m more of an acoustics person " . but but thr three months from now if for some reason i get really interested in dialogue , and i ' m " what is what was that part that , mari was saying ? " grad g: , like jim bass says " add a few lines on dialogue in your next perf " professor d: and then i ' m trying to fi , that 's when i look in general when i look things up most , is when it 's something that did n't really stick in my head the first time around and but for some new reason i ' m interested in the old . professor a: and that 's half of what i i would use it for . but i also a lot of times , make , think to myself " this is interesting , i ' ve got ta come back and follow up on it " . so , things that are interesting , i would be , wanting to do a query about . and also , i like the idea of going around the room , because if somebody else thought something was interesting , i 'd want to know about it and then i 'd want to follow up on it . professor d: . that that might get at some of what i was concerned about , being interested in something later that w , i did n't consider to be important the first time , which for me is actually the dominant thing , because if it was really important it tends to stick more than if i did n't , but some new task comes along that makes me want to look up . grad g: by so by going around , you ca n't get of it , right ? w we just need to start somewhere . phd f: the question the question then is h how much bias do we introduce by , introduce by saying , this was important now and , maybe tha something else is important later ? , does it does the bias matter ? , that 's , i , a question for you guys . postdoc h: , and one thing , we 're saying " important " and we 're saying " interesting " . phd f: does building queries based on what 's important now introduce an irreversible bias on being able to do what morgan wants to do later ? professor d: i , i what i keep coming back to in my own mind is that , the soonest we can do it , we need to get up some system so that people who ' ve been involved in the meeting can go back later , even if it 's a poor system in some ways , and ask the questions that they actually want to know . if , if , as soon as we can get that going at any level , then we 'll have a much better handle on what questions people want to ask than in any anything we do before that . but we have to bootstrap somehow , postdoc h: i will say that i chose " interesting " because it includes also " important " in some cases . but , i feel like the summary gets at a different type of information . postdoc h: , and also i it puts a lot of burden on the person to evaluate . , inter " interesting " is non - threatening in professor a: generati generating an interesting summary , no , i in the interest of generating some minutes here , and also moving on to action items and other things , let me just go through the things that i wrote down as being important , that we at least decided on . crosspads we were going to try , if landay can get the , get them to you guys , and see if they 're interesting . and if they are , then we 'll try to get m do it more . getting electronic summary from a note - taking person if they happen to do it anyway . 
getting just , digital pictures a couple digital pictures of the table and boards to set the context of the meeting . , and then going around the room at the end to just say qu ask people to mention something interesting that they learned . so rather than say the most interesting thing , something interesting , professor a: thing that was discussed . and then the last thing c would be for those people who are willing to stay afterwards and give an oral summary . ok ? does that cover everything we talked about ? that , that we want to do ? postdoc h: a and one and one qualification on the oral summaries . they 'd be s they 'd be separate . they would n't be hearing each other 's summaries . grad g: , that 's like n that 's gon na predominantly end up being whoever takes down the equipment then . postdoc h: and and that would also be that the data would be included in the database . phd e: , there is still this hope that people might actually think of real queries they really want to ask at some point . and that if that ever should happen , then we should try and write them down . professor d: , if we can figure out a way to jimmy a very rough system , say in a year , then , so that in the second and third years we actually have something to phd b: wanted to say one thing about queries . , the level of the query could be , very low - level or very high - level . and it gets fuzzier and fuzzier as you go up , right ? phd b: so you need to have some if you start working with queries , some way of identifying what the , if this is something that requires a one - word answer or it 's one place in the recording versus was there general agreement on this issue of all the people who ha , you can gen you can ask queries that are meaningful for people . , they 're very meaningful cuz they 're very high - level . but they wo n't exist anywhere in the a grad g: . so we 're gon na have to start with keywords and if someone becomes more interested we could work our way up . professor d: if our goal is wizard of oz - ish , we might want to is it that people would really like to know about this data . professor d: and if it 's something that we how to do yet , th great , that 's , research project for year four . grad g: , i was thinking about wizard of oz , but it requires the wizard to know all about the meetings . phd e: get people to ask questions that they def the machine definitely ca n't answer at the moment , grad g: but that neither could anyone else , though , is what , my point is . postdoc h: i was wondering if there might be one s more source of queries which is indicator phrases like " action item " , which could be obtained from the text from the transcript . grad g: right . since we have the transcript . dates maybe . that 's something i always forget . phd b: , probably if you have to sit there at the end of a meeting and say one thing you remember , it 's probably whatever action item was assigned to you . phd b: so , in general , that could be something you could say , right ? i ' m supposed to do this . it it does n't postdoc h: , that 's true . , but then you could prompt them to say , " other than your action item " , whatever . but but the action item would be a way to get , maybe an additional query . professor a: ok - ok . speaking of action items , can we move on to action items ? 
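the indicator-phrase idea ( spotting cues like " action item " directly in the transcript ) is also easy to pilot once time-stamped transcripts exist ; a sketch , with the phrase list and the transcript line format assumed :

```python
# scan a time-stamped transcript for indicator phrases like
# "action item"; each transcript line is assumed (hypothetically)
# to look like: start_secs <tab> speaker <tab> utterance

INDICATORS = ("action item", "to do", "deadline")

def spot(transcript_lines):
    hits = []
    for line in transcript_lines:
        start, speaker, utterance = line.split("\t", 2)
        for phrase in INDICATORS:
            if phrase in utterance.lower():
                hits.append((float(start), speaker, phrase, utterance))
    return hits

lines = ["312.4\tprofessor a\tok , the action item is the web pages ."]
print(spot(lines))  # -> [(312.4, 'professor a', 'action item', ...)]
```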
professor a: or maybe we should until the summary of this until this meeting is transcribed and then we will hav professor d: somewhere up there we had milestones , but i did y did you get enough milestone , from the description things ? professor a: i got . , why do n't you hand me those transparencies so that i remember to take them . eee , professor d: and , there 's detail behind each of those , as much as is needed . so , you just have to let us know . professor a: what i have down for action items is we 're supposed to find out about our human subject , requirements . people are supposed to send me u r for their for web pages , to c and i 'll put together an overall cover . and you 're s professor a: , i need to email adam or jane , about getting the data . who should i email ? grad g: , how quickly do you want it ? my july is really very crowded . and so , professor a: how about if c , right now all i want i personally only want text data . the only thing jeff would do anything with right now but i ' m just speaking fr based on a conversation with him two weeks ago i had in turkey . but all he would want is the digits . , but i 'll just speak for myself . i ' m interested in getting the language model data . , so i ' m just interested in getting transcriptions . so then just email you ? postdoc h: you could email to both of us , just , if you wanted to . , i do n't think either of us would mind recei professor d: in in our phone call , before , we , it turns out the way we 're gon na send the data is by , and , and then what they 're gon na do is take the cd - rom and transfer it to analog tape and give it to a transcription service , that will professor d: and then there will be some things , many things that do n't work out . and that 'll go back to ibm and they 'll , they run their aligner on it and it kicks out things that do n't work , which , the overlaps will certainly be examples of that . , what w we will give them all of it . right ? grad g: so we 'll give them all sixteen channels and they 'll do whatever they want with it . professor d: it 's also wo n't be adding much to the data to give them the mixed . phd f: it 's not right . it does n't it is n't difficult for us to do , phd b: you should that may be all that they want to send off to their transcribers . professor a: ok . related to the conversation with picheny , i need to email him , my shipping address and you need to email them something which you already did . postdoc h: i did . i m emailed them the transcriber url , the on - line , data that adam set up , the url so they can click on an utterance and hear it . and i emailed them the str streamlined conventions which you got a copy of today . professor d: right . and i was gon na m email them the which i have n't yet , a pointer to the web pages that we currently have , cuz in particular they want to see the one with the way the recording room is set up and so on , your page on that . postdoc h: i c - i cc ' ed morgan . i should have sent i should have cc ' ed you as . grad g: not an immediate action item but something we do have to worry about is data formats for higher - level information . grad g: , or d or not even higher level , different level , prosody and all that . we 're gon na have to figure out how we 're gon na annotate that . professor a: w my my u feeling right now on format is you guys have been doing all the work and whatever you want , we 're happy to live with . 
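on handing ibm a mixed track alongside the sixteen channels : the mixdown itself is cheap . a sketch , assuming the channels are already loaded as aligned , equal-length arrays :

```python
# naive mixdown of the sixteen head-mounted channels to one track,
# as discussed for the transcribers. assumes the channels are already
# time-aligned, equal-length float arrays in [-1, 1].

import numpy as np

def mixdown(channels):
    stack = np.stack(channels)   # shape: (n_channels, n_samples)
    mix = stack.mean(axis=0)     # averaging keeps the level sane
    peak = np.abs(mix).max()
    return mix / peak if peak > 1.0 else mix

# mixed = mixdown([ch0, ch1, ..., ch15]); write out as a single wav
```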
phd f: here 's a mysterious file : " buw001e dialogueact1785 314998 315022 e phd s^aa^j -1 0 yeah . " " buw001f dialogueact1784 314997 315025 f phd fh -1 0 and " professor d: we also had the , that we were s , that you were gon na get us the eight - hundred number and we 're all gon na we 're gon na call up your communicator thing and we 're gon na be good slash bad , depending on how you define it , users . professor c: now , something that i mentioned earlier to mari and liz is that it 's probably important to get as many non - technical and non - speech people as possible in order to get some realistic users . so if you could ask other people to call and use our system , that 'd be good . cuz we do n't want people who already know how to deal with dialogue systems , professor a: e , it could result in some good bloopers , which is always good for presentations . , anyway professor d: we talked about that we 're getting the recording equipment running at uw . and so it depends , w e they 're , they 're p m if that comes together within the next month , there at least will be , major communications between dan and uw folks professor a: i ' m shooting to try to get it done get it put together by the beginning of august . professor d: but we have it 's pretty we . , he s , he said that it was sitting in some room collecting dust professor a: , and i will email these notes , i ' m not what to do about action items for the data , although , then somebody i somebody needs to tell landay that you want the pads . professor d: i 'll do that . , and he also said something about outside there that came up about the outside text sources , that he may have professor d: some text sources that are close enough to the thing that we can play with them for a language model . phd e: , that was what he was saying was this he this thing that , jason had been working on finds web pages that are thematically related to what you 're talking about . , that 's the idea . so that would be a source of text which is supposedly got the right vocabulary . but it 's very different material . it 's not spoken material , professor a: but but that 's actually what i wanna do . that 's that 's what i wanna work with , is things that s the wrong material but the right da the right source . grad g: un - unfortunately landay told me that jason is not gon na be working on that anymore . he 's switching to other again . professor a: . he seemed when i asked him if he could actually supply data , he seemed a little bit more reluctant . so , i 'll send him email . i 'll put it in an action item that i send him email about it . and if i get something , great . if i do n't get something professor a: , otherwise , if you guys have any papers or i could use , i could use your web pages . that 's what we could do . you ' ve got all the web pages on the meeting recor professor a: i one less action item . use what web pages there are out there on meeting recorders . grad g: , that 's . what his software does is h it picks out keywords and does a google - like search . professor d: there 's there 's some , carnegie mellon , right ? on on meeting recording , phd b: right . and then j there 's th that 's where you would want to eventually be able to have a board or a camera , because of all these classroom grad g: , georgia tech did a very elaborate instrumented room . and i want to try to stay away from that . so professor a: great . that solves that problem . one less action item . ok . that 's good enou that 's all think of .
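the " mysterious file " lines quoted above look like per-utterance annotation records ; reading the fields as meeting id , act id , start , end , channel , role , dialogue-act tag , two flags , then the words is only a guess , but under that guess a parser is short :

```python
# parse records shaped like the "mysterious file" lines above, e.g.
#   buw001e dialogueact1785 314998 315022 e phd s^aa^j -1 0 yeah .
# the field meanings (meeting, act id, start, end, channel, role,
# dialogue-act tag, two numeric flags, transcript) are assumptions.

from typing import NamedTuple

class ActRecord(NamedTuple):
    meeting: str
    act_id: str
    start: int
    end: int
    channel: str
    role: str
    tag: str
    words: str

def parse(line):
    f = line.split()
    return ActRecord(f[0], f[1], int(f[2]), int(f[3]),
                     f[4], f[5], f[6], " ".join(f[9:]))

rec = parse("buw001e dialogueact1785 314998 315022 e phd s^aa^j -1 0 yeah .")
print(rec.tag, rec.words)  # s^aa^j yeah .
```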
postdoc h: can i ask , one thing ? it relates to data collection and i 'd and we mentioned earlier today , this question of , so , i s i know that from with the near - field mikes some of the problems that come with overlapping speech , are lessened . but i wonder if , is that sufficient or should we consider maybe getting some data gathered in such a way that , u w we would c , p have a meeting with less overlap than would otherwise be the case ? so either by rules of participation , or whatever . now , , it 's true , we were discussing this earlier , that depending on the task so if you ' ve got someone giving a report you 're not gon na have as much overlap . postdoc h: but , i , so we 're gon na have s , non - overlapping samples anyway . but , in a meeting which would otherwise be highly overlapping , is the near - field mike enough or should we have some rules of participation for some of our samples to lessen the overlap ? professor a: i do n't think we should have rules of participation , but we should try to get a variety of meetings . that 's something that if we get the meeting going at uw , that i probably can do more than you guys , cuz you guys are probably mostly going to get icsi people here . but we can get anybody in ee , over and possibly also some cs people , over at uw . so , that there 's a good chance we could get more variety . phd b: they 're still gon na overlap , but mark and others have said that there 's quite a lot of found data from the discourse community that has this characteristic and also the political y , anything that was televised for a third party has the characteristic of not very much overlap . professor d: wasn - but w we were saying before also that the natural language group here had less overlap . so it also depends on the style of the group of people . postdoc h: on the task , and the task . it 's just wanted to , because , it is true people can modify the amount of overlap that they do if they 're asked to . not not entirely modify it , but lessen it if it 's desired . but if that 's sufficient data wanted to be that we will not be having a lot of data which ca n't be processed . professor a: so i ' m just writing here , we 're not gon na try to specify rules of interaction but we 're gon na try to get more variety by i using different groups of people postdoc h: fine . and i , i know that the near f near - field mikes will take care of also the problems to s to a certain degree . professor a: cuz if i recorded some administrative meetings then that may have less overlap , because you might have more overlap when you 're doing something technical and disagreeing or whatever . postdoc h: , as a contributary , so i know that in l in legal depositions people are pr are prevented from overlapping . they 'll just say , you know , " till each person is finished before you say something " . so it is possible to lessen if we wanted to . but but these other factors are fine . wanted to raise the issue . professor a: , the reason why i did n't want to is be why i personally did n't want to is because i wanted it to be as , unintrusive as possi as you could be with these things hanging on you . postdoc h: , that 's always desired . want to be we do n't that we 're able to process , i u , as much data as we can . phd b: so liberman and others were interested in a lot of found data . 
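the overlap worry raised here could be quantified once segment time-stamps exist , by measuring the fraction of talk time with more than one active speaker ; a sketch , with the segment format assumed :

```python
# fraction of talk time with >1 simultaneous speaker, from per-speaker
# (start_secs, end_secs) segment lists. the input format is hypothetical.

def overlap_fraction(segments_by_speaker):
    events = []  # (+1 at each segment start, -1 at each segment end)
    for segs in segments_by_speaker.values():
        for start, end in segs:
            events += [(start, 1), (end, -1)]
    events.sort()
    active, prev_t, talk, overlap = 0, 0.0, 0.0, 0.0
    for t, delta in events:
        if active >= 1:
            talk += t - prev_t
        if active >= 2:
            overlap += t - prev_t
        active += delta
        prev_t = t
    return overlap / talk if talk else 0.0

segs = {"a": [(0.0, 10.0)], "b": [(8.0, 12.0)]}
print(overlap_fraction(segs))  # 2s overlap / 12s talk ~ 0.167
```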
so there 's lots of recordings that they 're not close - talk mike , and and there 's lots of television , on , political debates and things like that , congre congressional hearings . boring like that . and then the cmu folks and i were on the other side in cuz they had collected a lot of meetings that were like this and said that those are nothing like these meetings . , so there 're really two different kinds of data . and , i we just left it as @ that if there 's found data that can be transformed for use in speech recognition easily , then we would do it , but newly collected data would be natural meetings . professor d: actually , th @ the cmu folk have collected a lot of data . is that is that going to be publicly available , phd b: if people were interested they could talk to them , but i got the feeling there was some politics involved . phd b: but they had multiple mikes and they did do recognition , and they did do real conversations . but as far as i know they did n't offer that data to the community at this meeting . but that could change cuz mark , mark 's really into this . we should keep in touch with him . professor d: , once we send out , we still have n't sent out the first note saying " hey , this list exists " . but but , once we do that professor d: . it 's on i already added that one on my board to do that . hopefully everybody here is on that list . we should at least check that everybody here ? grad g: i added a few people who did n't who i knew had to be on it even though they did n't tell me . postdoc h: , i am . so , i w , just for clarification . so " found data " , they mean like established corpora of linguistics and other fields , right ? phd b: , " found " has , also the meaning that 's it very natural . it 's things occur without any , the pe these people were n't wearing close - talking mikes , but they were recorded anyway , like the congressional hearings and , for legal purposes or whatever . postdoc h: but it includes like standard corpora that have been used for years in linguistics and other fields . phd b: they did n't have to collect it . it 's not " found " in the sense that at the time it was collected for the purpose . phd b: but what he means is that , mark was really a fan of getting as much data as possible from , reams and reams of , of broadcast , phd b: web , tv , radio . but he understands that 's very different than these this type of meeting . phd e: so we should go around and s we should go around and say something interesting that happened at the meeting ? grad g: so , i really liked the idea of what was interesting was the combination of the crosspad and the speech . especially , the interaction of them rather than just note - taking . so , can you determine the interesting points by who 's writing ? can you do special gestures and so on that have , special meaning to the corpora ? i really liked that . postdoc h: , realized there 's another category of interesting things which is that , i found this discussion very , i this question of how you get at queries really interesting . and and the and i and the fact that it 's , nebulous , what that what query it would be because it depends on what your purpose is . so i actually found that whole process of trying to think of what that would involve to be interesting . but that 's not really a specific fact . thought we went around a discussion of the factors involved there , which was worthwhile . phd e: i had a real revelation about taking pictures . 
i why i did n't do this before and i regret it . so that was very interesting for me . phd e: not that i the boards are n't really related to this meeting . , i will take pictures of them , phd f: i ' m gon na pass because i ca n't , of the jane took my answer . phd f: , so i ' m gon na pass for the moment but y come back to me . professor a: i think " pass " is socially acceptable . but i will say , i will actually , a spin on different slightly different spin on what you said , this issue of , realizing that we could take minutes , and that actually may be a goal . so that may be the test in a sense , test data , the template of what we want to test against , generating a summary . so that 's an interesting new twist on what we can do with this data . professor c: i agree with jane and eric . the question of how to generate queries automatically was the most interesting question that came up , and it 's something that , as you said , is a whole research topic in itself , so i do n't think we 'll be able to do anything on it because we do n't have funding on it , in this project . but , it 's definitely something i would want to do something on . professor d: , being more management lately than research , the thing that impressed me most was the people dynamics and not any of the facts . that is , i really enjoyed hanging out with this group of people today . so that 's what really impressed me . grad g: that actually has come up a couple times in queries . i was talking to landay and that was one of his examples . when when did people laugh ? postdoc h: h do we need do i need to turn something off here , or i do unplug this , or ?
the discussion concerned mainly ideas about data collection and the nature and generation of queries on meetings. meeting notes taken by participants as standard minutes or summaries , or on devices like crosspads , can provide useful information. there is also interest in the speech community in the fusion of speech with visual data. taking some photos of the whiteboard and the positioning of participants is easy enough to do. another option would be the recording by participants of short oral summaries of the meeting. summaries could be used to bootstrap queries , the exact nature of which remained nebulous. candidate types are keyword searches , action items , elaboration on points of interest , and agreement between participants. an initial rough prototype system can be put together to test any hypotheses. the recorded data will be stored on cd-roms and sent to ibm for transcription. there is also work being done on the annotation of prosody. the corpus could be enriched with found data ( public or collected by other projects ) , if those prove appropriate for use in the project. finally , project web pages and a mailing list are being set up , and uw are going to investigate the suitability of their recording equipment. within the piloting of data collection ideas , it was decided that crosspads are going to be used for detection of "hot points" during the meeting. other ideas to be tested are the use of summaries or minutes ( if a group normally produces them ) and the recording of oral summaries by individual participants after the meeting. photographing the contents of the board and the positions of the meeting participants will provide extra information. for query generation purposes , all participants will also be asked for their highlight of the meeting. the general goal regarding the corpus is to investigate the acquisition of further appropriate data through public sources or available collections of other institutions. as to recordings at icsi , the group agreed that imposing rules of participation in order to avoid speaker overlaps was not desirable. instead , they will aim for collecting stylistically varied data ( different group dynamics and types of meeting ). further action will also be taken to close other pending issues: the web pages will be organized , the recording room will be finalized , and uw will also test their recording infrastructure. the recording of meetings and any possible additional tasks must be set up in a user-friendly way , otherwise it would be difficult to recruit volunteers. asking participants to do more than they normally would in a meeting could put people off. the video-recording of meetings , apart from adding an extra level of instrumentation complexity , can also make people apprehensive. the usability and usefulness of crosspads is not certain. acquiring data from other sources will not be straightforward , as they may either not be suitable for this project or not publicly available. querying is also a major issue: what users would ask from a system is not clear. how such a system would resolve high-level queries ( e.g. regarding agreement between participants ) is also hard to tell at this stage. as transcription has not started yet , there was concern as to how ibm will deal with multi-channel data. the abundance of speaker overlaps may also affect the quality of the transcription. however , it was accepted that some problems with the transcription of jargon are , to an extent , unavoidable. five hours of recorded pilot data are already available.
ibm , who will be carrying out the transcription , have been emailed the urls for the online data set-up and for the transcription tool. some work has already been done on the annotation of speech features such as prosody.
###dialogue: grad g: no . , i do n't think they 're designed to be over your ears . phd b: , i know . it just it really hurts . it gives you a headache , like if you on your temple professor d: so are we recording now ? is this ! we 're we 're live . ok . so , what were we gon na talk about again ? so we said data collection , which we 're doing . professor a: ok . do we do th do you go around the room and do names or anything ? grad g: u usually we ' ve done that and also we ' ve s done digits as , but i forgot to print any out . so . besides with this big a group , phd e: y your thing { nonvocalsound } may be pointing in a funny direction . it 's it helps if it points upwards . professor d: , the element , n should be as close to you your mouth as possible . professor d: alright . so what we had was that we were gon na talk about data collection , and , , you put up there data format , and other tasks during data collection , professor a: so , the goal was what can we do how can you do the data collection differently to get professor a: what can you add to it to get , some information that would be helpful for the user - interface design ? like professor a: especially for querying . so , getting people to do queries afterwards , getting people to do summaries afterwards . postdoc h: , one thing that came up in the morning was the , i , if he i , if he has s i do n't remember , mister lan - doctor landry ? postdoc h: la - landay ? so he has , these , , tsk note - taking things , postdoc h: then that would be a summary which you would n't have to solicit . y if we were able to do that . professor a: , if you actually take notes as a summary as opposed to n take notes in the sense of taking advantage of the time - stamps . so action item or , reminder to send this to so - and - so , blah - blah . professor a: so that would n't be a summary . that would just be that would b relate to the query side . grad g: but if we had the crosspads , we could ask people , if something comes up write it down and mark it somehow , phd e: right . , we because you 'd have several people with these pads , you could collect different things . phd f: , the down - side to that is that he indicated that the , quality of the handwriting recognition was quite poor . grad g: but that 's alright . i do n't think there 'd be so many that you could n't have someone clean it up pretty easily . professor a: . we also could come up with some code for things that people want to do so that for frequent things . and the other things , people can write whatever they want . , it 's to some extent , for his benefit . so , if that , if we just keep it simple then maybe it 's still useful . professor d: realized we skipped the part that we were saying we were gon na do at the front where we each said who we were . professor d: no , not a no , my mind went elsewhere . so , , i ' m morgan , where am i ? i ' m on channel three . postdoc h: should we have used pseudo - names ? should we do it a second time with pseudo no . grad g: , and eventually once this room gets a little more organized , the jimlets will be mounted under the table , and these guys will be permanently mounted somehow . , probably with double - sided tape , but so . you so we wo n't have to go through that . postdoc h: i have a question on protocol in these meetings , which is when you say " jimlet " and the person listening wo n't that is , sh shou how how do we get is that important information ? 
, the jimlet , the box that contains the professor d: , suppose we broaden out and go to a range of meetings besides just these internal ones . there 's gon na be lots of things that any group of people who know each other have in column common that we will not know . professor d: , we were originally gon na do this with vlsi design , and the reason we did n't go straight to that was because immediately ninety percent of what we heard would be jargon to us . so . professor a: , u so , actually there 's three issues . there 's the crosspad issue . should we do it and , if so , what 'll we have them do ? , do we have s people write summaries ? everybody or one person ? and then , do we ask people for how they would query things ? is that phd f: there 's there 're sub - problems in that , in that where or when do you actually ask them about that ? , that was one thing i was thinking about was is that dan said earlier that , maybe two weeks later , which is when you would want to query these things , you might ask them then . but there 's a problem with that in that if you 're not if you do n't have an interactive system , it 's gon na be hard to go beyond the first level of question . and furth i d explore the data further . so . professor d: which is , we certainly do want to branch out beyond , recording meetings about meeting recorder . and , once we get out beyond our little group , the people 's motivation factor , reduces enormously . and if we start giving them a bunch of other things to do , how , we did n another meeting here for another group and , they were fine with it . but if we 'd said , " ok , now all eight of you have to come up with , the summar " grad g: i did n't follow up either . so i did n't track them down and say " do th do it now " . but , no one spontaneously provided anything . professor d: i ' m worried that if you did even if you did push them into it , it might be semi - random , as opposed to what you 'd really want to know if you were gon na use this thing . grad g: how else to generate the queries other than getting an expert to actually listen to the meeting and say " that 's important , that might be a query " . postdoc h: tsk . , there is this other thing which y which you were alluding to earlier , which is , there are certain key words like , " action item " and things like that , which could be used in , t to some degree finding the structure . postdoc h: and and i also , was thinking , with reference to the n , note - taking , the advantage there is that you get structure without the person having to do something artificial later . and the fir third thing i wanted to say is the summaries afterwards , they should be recorded instead of written because that , it would take so long for people to write that you would n't get as good a summary . professor a: how about this idea ? that normally at most meetings somebody is delegated to be a note - taker . grad g: , that gives you a summary but it does n't really how do you generate queries from that ? phd e: . but , maybe a summary is one of the things we 'd want from the output of the system . right ? , they 're something . it 's a output you 'd like . grad g: , james and i were talking about this during one of the breaks . and the problem with that is , i ' m definitely going to do something with information retrieval even if it 's not full - bore what i ' m gon na do for my thesis . i ' m gon na do something . i ' m not gon na do anything with summarization . 
and so if someone wants to do that , that 's fine , but it 's not gon na be me . professor d: , that we , the f the core thing is that once we get some of these issues nailed down , we need to do a bunch of recordings and send them off to ibm and get a bunch of transcriptions even if they 're slightly flawed or need some other and then we 'll have some data there . and then , i we can start l looking and thinking , what do we want to know about these things and at the very least . phd b: i actually want to say something about the note pad . so , if you could sense just when people are writing , and you tell them not to doodle , or try not to be using that for other purposes , and each person has a note pad . they just get it when they come in the room . then you c you can just have a fff plot of wh , who 's writing when . that 's all you phd b: and , you can also have notes of the meeting . but i bet that 's that will allow you to go into the hot places where people are writing things down . phd b: , you can tell when you 're in a meeting when everybody stops to write something down that something was just said . it may not be kept in the later summary , but at that point in time is was something that was important . and that would n't take any extra professor d: that 's a good idea but that does n't maybe i ' m missing something , but that does n't get to the question of how we come up with queries , right ? phd b: , then you can go to the points where the you could actually go to those points in time and find out what they were talking about . and you r professor a: it 's an interesting thing . i do n't think it gets at the queries per - se , but it does give us an information fusion thing that , you wanna i say " what were the hot - points of the meeting ? " professor d: that that 's what , is that it gets at something interesting but if we were asking the question , which we were , of , " how do we figure out what 's the nature of the queries that people are gon na want to ask of such a system ? " , knowing what 's important does n't tell you what people are going to be asking . professor a: you could say they 're gon na ask about , when did so - and - so s talk about blah . and at least that gives you the word that they might run a query on . phd b: we do n't even , if you want to find out what any user will use , that might be true for one domain and one user , but a different domain and a different user professor d: , but we 're just looking for a place to start with that because , th what james is gon na be doing is looking at the user - interface and he 's looking at the query in i we we have five hours of pilot data of the other but we have zero hours of queries . so he 's just going " where do i start ? " professor a: w , th you could do the summaries actually may help get us there , for a couple reasons . one , if you have a summary if you have a bunch of summaries , you can do a word frequency count and see what words come up in different types of meetings . so " action item " is gon na come up whether it 's a vlsi meeting , or speech meeting , or whatever . so words that come up in different types of meetings may be something that you would want to query about . professor a: , the second thing you could possibly do with it is just run a little pilot experiment with somebody saying " here 's a summary of a meeting , what questions might you want to ask about it to go back ? 
" grad g: , that 's difficult because then they 're not gon na ask the questions that are in the summary . but , it would give professor a: that 's one possi one possible scenario , though , is you have the summary , and you want to ask questions to get more detail . grad g: th , it has to be a participant . , it does n't have to be . ok . so that is another use of meeting recorder that we have n't really talked about , which is for someone else , as opposed to as a remembrance agent , which is what had been my primary thought in the information retrieval part of it would be . but , i if you had a meeting participant , they could use the summary to refresh themselves about the meeting and then make up queries . but it 's not i how to do it if until you have a system . phd e: but th there is this , there is this class of queries , which are the things that you did n't realize were important at the time but some in retrospect you think " , hang on , did n't we talk about that ? " and it 's something that did n't appear in the summary but you and that 's what this , complete data capture is nicest for . phd e: cuz it 's the things that you would n't have bothered to make an effort to record but they get recorded . so , and th there 's no way of generating those , u until we just until they actually occur . phd e: , it 's like right , right . exactly . but , it 's difficult to say " and if i was gon na ask four questions about this , what would they be ? " those are n't the things that come up . postdoc h: i also think that w if you can use the summaries as an indication of the important points of the meeting , then you might get something like y so if th if the obscure item you want to know more about was some form of data collection , maybe the summary would say , " we discussed types of na data collection " . and , and and maybe you could get to it by that . if you if you had the larger structure of the discourse , then if you can categorize what it is that you 're looking for with reference to those l those larger headings , then you can find it even if you do n't have a direct route to that . grad g: mmm . although it seems like that 's , a high burden on the note - taker . grad g: that 's a pretty fine grain that the note - taker will have to take . professor a: i th no . you got to have somebody who knows the pro knows the topic or , whose job it is delegated to be the note - taker . phd b: no , but someone who can come sit in on the meetings and then takes the notes with them that the real note - taker phd b: and that way that one student has , a rough idea of what was going on , and they can use it for their research . , this is n't really necessarily what you would do in a real system , because that 's a lot of trouble and maybe it 's not the best way to do it . but if he has some students that want to study that then they should get to know the people and attend those meetings , and get the notes from the note - taker . grad g: right . , that 's a little bit of a problem . their note - taking application they ' ve been doing for the last couple of years , and i do n't think anyone is still working on it . they 're done . , so i ' m not that they have anyone currently working on notes . so what we 'd have to interest someone in is the combination of note and speech . grad g: and so the question is " is there such a person ? " and right now , the answer is " no " . 
professor d: i ' ve been thinking about it a little bit here about the , th this , e that the now i ' m thinking that the summary a summary , is actually a reasonable , bootstrap into this into what we 'd like to get at . it 's it 's not ideal , but we , we have to get started someplace . so i was just thinking about , suppose we wanted to get w we have this collection of meeting . we have five hours of . , we get that transcribed . so now we have five hours of meetings and , you ask me , " morgan , what d , what questions do you want to ask ? " , i would n't have any idea what questions i want to ask . i 'd have to get started someplace . so if i looked at summary of it , i 'd go " , i was in that meeting , i remember that , what was the part that " and and th that might then help me to think of things even things that are n't listed in the summary , but just as a refresh of what the general thing was going on in the meeting . professor a: it serves two purpo purposes . one , as a refresh to help bootstrap queries , but also , maybe we do want to generate summaries . and then it 's , it 's a key . professor a: this one being different , but in most meetings that i attend , there 's somebody t explicitly taking notes , frequently on a laptop , you can just make it be on a laptop , so then yo you 're dealing with ascii and not somebody you do n't have to go through handwriting recognition . , and then they post - edit it into , a summary and they email it out for minutes . , that happens in most meetings . postdoc h: i that , there 's we 're using " summary " in two different ways . so what you just described i would describe as " minutes " . postdoc h: and what i originally thought was , if you asked someone " what was the meeting about ? " postdoc h: and then they would say " , we talked about this and then we talked about that , and so - and - so talked about " and then you 'd have , like i e my thought was to have multiple people summarize it , on recording rather than writing because writing takes time and you get irrelevant other things that u take time , that whereas if you just say it immediately after the meeting , a two - minute summary of what the meeting was about , you would get , with mult see , i also worry about having a single note - taker because that 's just one person 's perception . and , , it 's releva it 's relative to what you 're focus was on that meeting , and people have different major topics that they 're interested in . postdoc h: so , my proposal would be that it may be worth considering both of those types , the note - taking and a spontaneous oral summary afterwards , no longer than two minutes , professor d: you can correct me on this , but , my impression was that , , true that the meetings here , nobody sits with a w , with a laptop phd e: i ' ve d when we when we have other meetings . when i have meetings on the european projects , we have someone taking notes . professor d: right ? where you ' ve got fifteen peo , most th this is one of the larger meetings . most of the meetings we have are four or five people professor d: and you 're not you do n't have somebody sitting and taking minutes for it . you just get together and talk about where you are . professor a: so , it depends on whether it 's a business meeting or a technical discussion . phd b: i how , but , the outline is up here and that 's what people are seeing . and if you have a or you shou could tell people not to use the boards . but there 's this missing information otherwise . 
phd e: just to take pictures of who 's there , where the microphones are , and then we could also put in what 's on the board . , like three or four snaps for every phd b: people who were never at the meeting will have a very hard time understanding it otherwise . phd e: , no . , think , that right now we do n't make a record of where people are sitting on the tables . and that the at some point that might be awfully useful . phd e: not not as part of the not as a part of the data that you have to recover . phd b: someone later might be able to take these and say " ok , they , at least these are the people who were there professor d: u , liz , you sa you sat in on the , subcommittee meeting or whatever professor d: , on you on the subcommittee meeting for at the , that workshop we were at that , mark liberman was having . so i was n't there . they they h must have had some discussion about video and the visual aspect , and all that . phd b: big , big interest . huge . , it personally , i do n't i would never want to deal with it . but i ' m just saying first of all there 's a whole bunch of fusion issues that darpa 's interested in . , fusing gesture and face recognition , even lip movement and things like that , for this task . and there 's also a personal interest on the part of mark liberman in this in storing these images in any data we collect so that later we can do other things with it . professor d: , you , that the key thing there is that this is a description of database collection effort that they 're talking about doing . and if the database exists and includes some visual information that does n't mean that an individual researcher is going to make any use of it . grad g: but that it 's gon na be a lot of effort on our part to create it , and store it , and get all the standards , and to do anything with it . professor d: so we 're gon na do what we 're gon na do , whatever 's reasonable for us . phd b: like i know with atis , we just had a tape recorder running all the time . and later on it turned out it was really good that you had a tape recorder of what was happening , even though you w you just got the speech from the machine . so if you can find some really , low , perplexity , professor d: , minimally , what dan is referring to at least having some representation of the p the spatial position of the people , cuz we are interested in some spatial processing . and so , grad g: , once the room is a little more fixed that 's a little easier cuz you 'll , the wireless . but phd b: also cmu has been doing this and they were the most vocal at this meeting , alex waibel 's group . and they have said , i talked to the student who had done this , that with two fairly inexpensive cameras they just recorded all the time and were able to get all the information from or maybe it was three from all the parts of the room . so we would be we might lose the chance to use this data for somebody later who wants to do some processing on it if we do n't collect it . grad g: i do n't disagree . that if you have that , then people who are interested in vision can use this database . the problem with it is you 'll have more people who do n't want to be filmed than who do n't want to be recorded . grad g: so that there 's going to be another group of people who are gon na say " i wo n't participate " . phd b: or you could put a paper bag over everybody 's head and not look at each other and not look at boards , and just all be sitting talking . 
that would be an interes bu postdoc h: , there 's that 'd be the parallel , . but y she 's we 're just proposing a minimal preservation of things on boards , postdoc h: sp spatial organization and you could anonymize the faces for that matter . , this is phd b: they have a pretty crude set - up . and they had they just turn on these cameras . they were they were not moving or anything . phd b: and they did n't actually align it or anything . they just they have it , though . postdoc h: , it 's worth considering . maybe we do n't want to spend that much more time discussing it , professor d: we might try that some and we certainly already have some recordings that do n't have that , which , we 'll get other value out of , . postdoc h: th , if it 's easy to collect it th then it 's a wise thing to do because once it 's gone . and phd b: i ' m just the community if ldc collects this data u , and l - if mark liberman is a strong proponent of how they collect it and what they collect , there will probably be some video data in there . phd b: and so that could argue for us not doing it or it could argue for us doing it . the only place where it overlaps is when some of the summarization issues are actually could be , easier made easier if you had the video . professor d: at the moment we should be determining this on the basis of our own , interests and needs rather than hypothetical ones from a community thing . as you say , if they decide it 's really critical then they will collect a lot more data than we can afford to , and will include all that . professor d: i ' m not worried about the cost of setting it up . i ' m worried about the cost of people looking at it . in other words , it 's it 'd be silly to collect it all and not look at it . and so i that we do have to do some picking and choosing of the that we 're doing . but i am int i do think that we m minimally want something we might want to look at some , subsets of that . like for a meeting like this , at least , take a polaroid of the of the boards , phd b: of the board . or at least make that the note - taker takes a sh , a snapshot of the board . phd b: that 'll make it a lot easier for meetings that are structured . , otherwise later on if nobody wrote this on the board down we 'd have a harder time summarizing it or agreeing on a summary . postdoc h: we and it especially since this is common knowledge . , this is shared knowledge among all the participants , and it 's a shame to keep it off the recording . grad g: er , if we were n't recording this , this would get lost . right ? grad g: that we 're not saving it anyway . right ? in in our real - life setting . professor a: what do you mean we 're not saving it anyway ? i ' ve written all of this down and it 's getting emailed to you . phd b: right . that would be the other alternative , to make that anything that was on the board , is in the record . professor a: , that 's why i ' m saying that the note - taking would be in many for many meetings there will be some note - taking , in which case , that 's a useful thing to have , we do n't need to require it . just like the , it would be great if we try to get a picture with every meeting . , so we wo n't worry about requiring these things , but the more things that we can get it for , the more useful it will be for various applications . so . 
professor d: so , departing for the moment from the data collection question but actually talking about , this group and what we actually want to do , so i that 's th the way what you were figuring on doing was , putting together some notes and sending them to everybody from today ? professor d: so so the question that we started with was whether there was anything else we should do during th during the collection . professor d: and i the crosspads was certainly one idea , and we 'll get them from him and we 'll just do that . and then the next thing we talked about was the summaries and are we gon na do anything about that . professor a: , before we leave the crosspads and call it done . so , if i ' m collecting data then there is this question of do i use crosspads ? so , that if we really have me collect data and i ca n't use crosspads , it 's probably less useful for you guys to go to the trouble of using it , unless you think that the crosspads are gon na n i ' m not what they 're gon na do . but but having a small percentage of the data with it , i ' m not whether that 's useful or not . maybe maybe it 's no big deal . professor d: i the point was to try again , to try to collect more information that could be useful later for the ui . so it 's landay supplying it so that landay 's can be easier to do . so it right now he 's g operating from zero , professor d: and so even if we did n't get it done from uw , it seems like that would could still you shou , at least try it . phd b: it 'd be useful to have a small amount of it just as a proof of concept . it will , what you can do with things . grad g: and and they seem to not be able to give enough of them away , so we could probably get more as . professor a: that 's true . so if it seems to be really useful to you guys , we could probably get a donation to me . grad g: , i ' m not . it will again depend on landay , and if he has a student who 's interested , and how much infrastructure we 'll need . , if it 's easy , we can just do it . , but if it requires a lot of our time , we probably wo n't do it . professor d: i a lot of the we 're doing now really is pilot in one sense or another . grad g: , though , the importance marking is a good idea , though . that if people have something in front of them grad g: do it on pilots or laptops . ok , if something 's important everyone clap . professor a: ok . so crosspads , we 're just gon na try it and see what happens . professor a: the note - taking so , that this is gon na be useful . so if we record data i will definitely ask for it . so , i j we should just say this is not we do n't want to put any extra burden on people , but if they happen to generate minutes , could they send it to us ? grad g: . what i was gon na say is that i do n't want to ask people to do something they would n't normally do in a meeting . it 's ver want to keep away from the artificiality . but it definitely if they exist . and then jane 's idea of summarization afterward is not a bad one . , picking out to let you pick out keywords , and , construct queries . grad g: people in the meeting . , just at the end of the meeting , before you go , phd e: people with radio mikes can go into separate rooms and continue recording without hearing each other . that 's the thing . phd e: if we do if we collect four different summaries , we 're gon na get all this weird data about how people perceive things differently . it 's like this is not what we meant to research . 
professor a: but but again , like the crosspads , i do n't think i would base a lot of on it , professor a: because i know when i see the clock coming near the end of the meeting , i ' m like inching towards the door . professor a: you 're probably not gon na get a lot of people wanting to do this . grad g: , i when you first said do it , spoken , what i was thinking is , then people have to come up and you have to hook them up to the recorder . so , if they 're already here that 's good , but if they 're not already here for i 'd rather do email . i ' m much faster typing than anything else . postdoc h: , i 'd just try , however the least intrusive and quickest way is , and th and closest to the meeting time too , cuz people will start to forget it as soon as they l leave . professor a: . that doing it orally at the end of the meeting is the best time . professor a: do n't because they 're a captive audience . once they leave , forget it . but but i professor a: but , i do n't think that they 'll necessarily you 'll get many people willing to stay . but , if you get even one professor d: that y that you ca n't certainly ca n't require it or people are n't gon na want to do this . but but if there 's some cases where they will , then it would be helpful . postdoc h: and i ' m also wondering , could n't that be included in the data sample so that you could increase the num , the words that are , recognized by a particular individual ? if you could include the person 's meeting and also the person 's summary , maybe that would be , postdoc h: an ad addition to their database . under the same acoustic circumstance , cuz if they just walk next door with their set - up , nothing 's changed , just grad g: it was the projector for a moment . it was like , " what 's going on ? " phd f: the question i had about queries was , so what we 're planning to do is have people look at the summaries and then generate queries ? are are we gon na try and o phd f: so , the question i had is have we given any thought to how we would generate queries automatically given a summary ? , that 's a whole research topic un unto itself , phd b: should n't landay and his group be in charge of figuring out how to do this ? , this is an issue that goes a little bit beyond where we are right now . phd e: someone wants to know when you 're getting picked up . is someone picking you up ? professor a: let 's see , you and i need dis , no , we did the liz talk . professor a: , after . we need to finish this discussion , and you and i need a little time for wrap - up and quad chart . professor d: , we should be able to wind up in another half - hour , you think ? professor d: , we still have n't talked about the action items from here and so on . grad g: , in answer to " is it landay 's problem ? " , he does n't have a student who 's interested right now in doing anything . so he has very little manpower . , there 's very little allocated for him and also he 's pretty focused on user interface . so i do n't think he wants to do information retrieval , query generation , that . professor d: , there 's gon na be these student projects that can do some things but it ca n't be , very deep . u i actually think that , again , just as a bootstrap , if we do have something like summaries , then having the people who are involved in the meetings themselves , who are cooperative and willing to do yet more , come up with queries , could at least give landay an idea of the things that people might want to know . 
, ye if he does n't know anything about the area , and the people are talking about and , phd b: but the people will just look at the summaries or the minutes and re and back - generate the queries . that 's what i ' m worried about . so you might as just give him the summaries . phd b: so what you want to h to do is , people who were there , who later see , minutes and s put in summary form , which is not gon na be at the same time as the meeting . there 's no way that can happen . are we gon na later go over it and , like , make up some to which these notes would be an answer , or a deeper postdoc h: i ' m also wondering if we could ask the people a question which would be " what was the most interesting thing you got out of this meeting ? " becau - in terms of like informativeness , postdoc h: it might be , that the summary would not in even include what the person thought was the most interesting fact . professor a: but actually i would say that 's a better thing to ask than have them summarize the meeting . postdoc h: because you get , like , the general structure of important points and what the meeting was about . postdoc h: so you get the general structure , the important points of what the meeting was about with the summary . but with the " what 's the most interesting thing you learned ? " , so the fact that , i know that transcriber uses snack is something that was interesting postdoc h: and that dan worked on that . so that was really so , you could ge pick up some of the micro items that would n't even occur as major headings but could be very informative . postdoc h: it would n't be too , cost - intensive either . , it 's like something someone can do pretty easily on the spur of the moment . phd e: cuz you 'll get cuz you 'll get very different answers from everybody , right ? grad g: , maybe one thing we could do is for the meetings we ' ve already done , i we did n't take minutes and we do n't have summaries . but , people could , like , listen to them a little bit and generate some queries . jane does n't need to . i ' m you have that meeting memorized by now . professor a: but actually it would be an easy thing to just go around the room and say what was the most interesting thing you learned , for those pe people willing to stay . postdoc h: and that it would pick up the micro - structure , the some of the little things that would be hidden . professor c: , but when you go around the room you might just get the effect that somebody says something professor c: and then you go around the room and they say " , me too , i agree . " phd e: on the other hand people might try and come up with different ones , right ? they might say " , i was gon na say that one but now i have to think of something else " . grad g: you have the other thing , that they know why we 're doing it . we 'll , we 'll be telling them that the reason we 're trying to do this is to d generate queries in the future , so try to pick things that other people did n't say . professor d: it 's gon na take some thought . it seemed the , interest that i had in this thing initially was , that i the form that you 're doing something else later , and you want to pick up something from this meeting related to the something else . so it 's really the imp the list of what 's important 's in the something else rather than the professor d: and it might be something minor of minor importance to the meeting . 
, if it was really major , if it 's the thing that really stuck in your head , then you might not need to go back and check on it even . so it 's that you 're trying to find you 're you ' ve now you were n't interested say i said " , i was n't that much interested in dialogue , i ' m more of an acoustics person " . but but thr three months from now if for some reason i get really interested in dialogue , and i ' m " what is what was that part that , mari was saying ? " grad g: , like jim bass says " add a few lines on dialogue in your next perf " professor d: and then i ' m trying to fi , that 's when i look in general when i look things up most , is when it 's something that did n't really stick in my head the first time around and but for some new reason i ' m interested in the old . professor a: and that 's half of what i i would use it for . but i also a lot of times , make , think to myself " this is interesting , i ' ve got ta come back and follow up on it " . so , things that are interesting , i would be , wanting to do a query about . and also , i like the idea of going around the room , because if somebody else thought something was interesting , i 'd want to know about it and then i 'd want to follow up on it . professor d: . that that might get at some of what i was concerned about , being interested in something later that w , i did n't consider to be important the first time , which for me is actually the dominant thing , because if it was really important it tends to stick more than if i did n't , but some new task comes along that makes me want to look up . grad g: by so by going around , you ca n't get of it , right ? w we just need to start somewhere . phd f: the question the question then is h how much bias do we introduce by , introduce by saying , this was important now and , maybe tha something else is important later ? , does it does the bias matter ? , that 's , i , a question for you guys . postdoc h: , and one thing , we 're saying " important " and we 're saying " interesting " . phd f: does building queries based on what 's important now introduce an irreversible bias on being able to do what morgan wants to do later ? professor d: i , i what i keep coming back to in my own mind is that , the soonest we can do it , we need to get up some system so that people who ' ve been involved in the meeting can go back later , even if it 's a poor system in some ways , and ask the questions that they actually want to know . if , if , as soon as we can get that going at any level , then we 'll have a much better handle on what questions people want to ask than in any anything we do before that . but we have to bootstrap somehow , postdoc h: i will say that i chose " interesting " because it includes also " important " in some cases . but , i feel like the summary gets at a different type of information . postdoc h: , and also i it puts a lot of burden on the person to evaluate . , inter " interesting " is non - threatening in professor a: generati generating an interesting summary , no , i in the interest of generating some minutes here , and also moving on to action items and other things , let me just go through the things that i wrote down as being important , that we at least decided on . crosspads we were going to try , if landay can get the , get them to you guys , and see if they 're interesting . and if they are , then we 'll try to get m do it more . getting electronic summary from a note - taking person if they happen to do it anyway . 
getting just , digital pictures a couple digital pictures of the table and boards to set the context of the meeting . , and then going around the room at the end to just say qu ask people to mention something interesting that they learned . so rather than say the most interesting thing , something interesting , professor a: thing that was discussed . and then the last thing c would be for those people who are willing to stay afterwards and give an oral summary . ok ? does that cover everything we talked about ? that , that we want to do ? postdoc h: a and one and one qualification on the oral summaries . they 'd be s they 'd be separate . they would n't be hearing each other 's summaries . grad g: , that 's like n that 's gon na predominantly end up being whoever takes down the equipment then . postdoc h: and and that would also be that the data would be included in the database . phd e: , there is still this hope that people might actually think of real queries they really want to ask at some point . and that if that ever should happen , then we should try and write them down . professor d: , if we can figure out a way to jimmy a very rough system , say in a year , then , so that in the second and third years we actually have something to phd b: wanted to say one thing about queries . , the level of the query could be , very low - level or very high - level . and it gets fuzzier and fuzzier as you go up , right ? phd b: so you need to have some if you start working with queries , some way of identifying what the , if this is something that requires a one - word answer or it 's one place in the recording versus was there general agreement on this issue of all the people who ha , you can gen you can ask queries that are meaningful for people . , they 're very meaningful cuz they 're very high - level . but they wo n't exist anywhere in the a grad g: . so we 're gon na have to start with keywords and if someone becomes more interested we could work our way up . professor d: if our goal is wizard of oz - ish , we might want to is it that people would really like to know about this data . professor d: and if it 's something that we how to do yet , th great , that 's , research project for year four . grad g: , i was thinking about wizard of oz , but it requires the wizard to know all about the meetings . phd e: get people to ask questions that they def the machine definitely ca n't answer at the moment , grad g: but that neither could anyone else , though , is what , my point is . postdoc h: i was wondering if there might be one s more source of queries which is indicator phrases like " action item " , which could be obtained from the text from the transcript . grad g: right . since we have the transcript . dates maybe . that 's something i always forget . phd b: , probably if you have to sit there at the end of a meeting and say one thing you remember , it 's probably whatever action item was assigned to you . phd b: so , in general , that could be something you could say , right ? i ' m supposed to do this . it it does n't postdoc h: , that 's true . , but then you could prompt them to say , " other than your action item " , whatever . but but the action item would be a way to get , maybe an additional query . professor a: ok - ok . speaking of action items , can we move on to action items ? 
professor a: or maybe we should until the summary of this until this meeting is transcribed and then we will hav professor d: somewhere up there we had milestones , but i did y did you get enough milestone , from the description things ? professor a: i got . , why do n't you hand me those transparencies so that i remember to take them . eee , professor d: and , there 's detail behind each of those , as much as is needed . so , you just have to let us know . professor a: what i have down for action items is we 're supposed to find out about our human subject , requirements . people are supposed to send me u r for their for web pages , to c and i 'll put together an overall cover . and you 're s professor a: , i need to email adam or jane , about getting the data . who should i email ? grad g: , how quickly do you want it ? my july is really very crowded . and so , professor a: how about if c , right now all i want i personally only want text data . the only thing jeff would do anything with right now but i ' m just speaking fr based on a conversation with him two weeks ago i had in turkey . but all he would want is the digits . , but i 'll just speak for myself . i ' m interested in getting the language model data . , so i ' m just interested in getting transcriptions . so then just email you ? postdoc h: you could email to both of us , just , if you wanted to . , i do n't think either of us would mind recei professor d: in in our phone call , before , we , it turns out the way we 're gon na send the data is by , and , and then what they 're gon na do is take the cd - rom and transfer it to analog tape and give it to a transcription service , that will professor d: and then there will be some things , many things that do n't work out . and that 'll go back to ibm and they 'll , they run their aligner on it and it kicks out things that do n't work , which , the overlaps will certainly be examples of that . , what w we will give them all of it . right ? grad g: so we 'll give them all sixteen channels and they 'll do whatever they want with it . professor d: it 's also wo n't be adding much to the data to give them the mixed . phd f: it 's not right . it does n't it is n't difficult for us to do , phd b: you should that may be all that they want to send off to their transcribers . professor a: ok . related to the conversation with picheny , i need to email him , my shipping address and you need to email them something which you already did . postdoc h: i did . i m emailed them the transcriber url , the on - line , data that adam set up , the url so they can click on an utterance and hear it . and i emailed them the str streamlined conventions which you got a copy of today . professor d: right . and i was gon na m email them the which i have n't yet , a pointer to the web pages that we currently have , cuz in particular they want to see the one with the way the recording room is set up and so on , your page on that . postdoc h: i c - i cc ' ed morgan . i should have sent i should have cc ' ed you as . grad g: not an immediate action item but something we do have to worry about is data formats for higher - level information . grad g: , or d or not even higher level , different level , prosody and all that . we 're gon na have to figure out how we 're gon na annotate that . professor a: w my my u feeling right now on format is you guys have been doing all the work and whatever you want , we 're happy to live with . 
phd f: here 's a mysterious file buw001edialogueact1785 314998 315022 e phd s^aa^j -1 0 yeah . buw001fdialogueact1784 314997 315025 f phd fh -1 0 and professor d: we also had the , that we were s , that you were gon na get us the eight - hundred number and we 're all gon na we 're gon na call up your communicator thing and we 're gon na be good slash bad , depending on how you define it , users . professor c: now , something that i mentioned earlier to mari and liz is that it 's probably important to get as many non - technical and non - speech people as possible in order to get some realistic users . so if you could ask other people to call and use our system , that 'd be good . cuz we do n't want people who already know how to deal with dialogue systems , professor a: e , it could result in some good bloopers , which is always good for presentations . , anyway professor d: we talked about that we 're getting the recording equipment running at uw . and so it depends , w e they 're , they 're p m if that comes together within the next month , there at least will be , major communications between dan and uw folks professor a: i ' m shooting to try to get it done get it put together by the beginning of august . professor d: but we have it 's pretty we . , he s , he said that it was sitting in some room collecting dust professor a: , and i will email these notes , i ' m not what to do about action items for the data , although , then somebody i somebody needs to tell landay that you want the pads . professor d: i 'll do that . , and he also said something about outside there that came up about the outside text sources , that he may have professor d: some text sources that are close enough to the thing that we can play with them for a language model . phd e: , that was what he was saying was this he this thing that , jason had been working on finds web pages that are thematically related to what you 're talking about . , that 's the idea . so that would be a source of text which is supposedly got the right vocabulary . but it 's very different material . it 's not spoken material , professor a: but but that 's actually what i wanna do . that 's that 's what i wanna work with , is things that s the wrong material but the right da the right source . grad g: un - unfortunately landay told me that jason is not gon na be working on that anymore . he 's switching to other again . professor a: . he seemed when i asked him if he could actually supply data , he seemed a little bit more reluctant . so , i 'll send him email . i 'll put it in an action item that i send him email about it . and if i get something , great . if i do n't get something professor a: , otherwise , if you guys have any papers or i could use , i could use your web pages . that 's what we could do . you ' ve got all the web pages on the meeting recor professor a: i one less action item . use what web pages there are out there on meeting recorders . grad g: , that 's . what his software does is h it picks out keywords and does a google - like search . professor d: there 's there 's some , carnegie mellon , right ? on on meeting recording , phd b: right . and then j there 's th that 's where you would want to eventually be able to have a board or a camera , because of all these classroom grad g: , georgia tech did a very elaborate instrumented room . and i want to try to stay away from that . so professor a: great . that solves that problem . one less action item . ok . that 's good enou that 's all think of . 
postdoc h: can i ask , one thing ? it relates to data collection and i 'd and we mentioned earlier today , this question of , so , i s i know that from with the near - field mikes some of the problems that come with overlapping speech , are lessened . but i wonder if , is that sufficient or should we consider maybe getting some data gathered in such a way that , u w we would c , p have a meeting with less overlap than would otherwise be the case ? so either by rules of participation , or whatever . now , , it 's true , we were discussing this earlier , that depending on the task so if you ' ve got someone giving a report you 're not gon na have as much overlap . postdoc h: but , i , so we 're gon na have s , non - overlapping samples anyway . but , in a meeting which would otherwise be highly overlapping , is the near - field mike enough or should we have some rules of participation for some of our samples to lessen the overlap ? professor a: i do n't think we should have rules of participation , but we should try to get a variety of meetings . that 's something that if we get the meeting going at uw , that i probably can do more than you guys , cuz you guys are probably mostly going to get icsi people here . but we can get anybody in ee , over and possibly also some cs people , over at uw . so , that there 's a good chance we could get more variety . phd b: they 're still gon na overlap , but mark and others have said that there 's quite a lot of found data from the discourse community that has this characteristic and also the political y , anything that was televised for a third party has the characteristic of not very much overlap . professor d: wasn - but w we were saying before also that the natural language group here had less overlap . so it also depends on the style of the group of people . postdoc h: on the task , and the task . it 's just wanted to , because , it is true people can modify the amount of overlap that they do if they 're asked to . not not entirely modify it , but lessen it if it 's desired . but if that 's sufficient data wanted to be that we will not be having a lot of data which ca n't be processed . professor a: so i ' m just writing here , we 're not gon na try to specify rules of interaction but we 're gon na try to get more variety by i using different groups of people postdoc h: fine . and i , i know that the near f near - field mikes will take care of also the problems to s to a certain degree . professor a: cuz if i recorded some administrative meetings then that may have less overlap , because you might have more overlap when you 're doing something technical and disagreeing or whatever . postdoc h: , as a contributary , so i know that in l in legal depositions people are pr are prevented from overlapping . they 'll just say , you know , " till each person is finished before you say something " . so it is possible to lessen if we wanted to . but but these other factors are fine . wanted to raise the issue . professor a: , the reason why i did n't want to is be why i personally did n't want to is because i wanted it to be as , unintrusive as possi as you could be with these things hanging on you . postdoc h: , that 's always desired . want to be we do n't that we 're able to process , i u , as much data as we can . phd b: so liberman and others were interested in a lot of found data . 
so there 's lots of recordings that they 're not close - talk mike , and and there 's lots of television , on , political debates and things like that , congre congressional hearings . boring like that . and then the cmu folks and i were on the other side in cuz they had collected a lot of meetings that were like this and said that those are nothing like these meetings . , so there 're really two different kinds of data . and , i we just left it as @ that if there 's found data that can be transformed for use in speech recognition easily , then we would do it , but newly collected data would be natural meetings . professor d: actually , th @ the cmu folk have collected a lot of data . is that is that going to be publicly available , phd b: if people were interested they could talk to them , but i got the feeling there was some politics involved . phd b: but they had multiple mikes and they did do recognition , and they did do real conversations . but as far as i know they did n't offer that data to the community at this meeting . but that could change cuz mark , mark 's really into this . we should keep in touch with him . professor d: , once we send out , we still have n't sent out the first note saying " hey , this list exists " . but but , once we do that professor d: . it 's on i already added that one on my board to do that . hopefully everybody here is on that list . we should at least check that everybody here ? grad g: i added a few people who did n't who i knew had to be on it even though they did n't tell me . postdoc h: , i am . so , i w , just for clarification . so " found data " , they mean like established corpora of linguistics and other fields , right ? phd b: , " found " has , also the meaning that 's it very natural . it 's things occur without any , the pe these people were n't wearing close - talking mikes , but they were recorded anyway , like the congressional hearings and , for legal purposes or whatever . postdoc h: but it includes like standard corpora that have been used for years in linguistics and other fields . phd b: they did n't have to collect it . it 's not " found " in the sense that at the time it was collected for the purpose . phd b: but what he means is that , mark was really a fan of getting as much data as possible from , reams and reams of , of broadcast , phd b: web , tv , radio . but he understands that 's very different than these this type of meeting . phd e: so we should go around and s we should go around and say something interesting that happened at the meeting ? grad g: so , i really liked the idea of what was interesting was the combination of the crosspad and the speech . especially , the interaction of them rather than just note - taking . so , can you determine the interesting points by who 's writing ? can you do special gestures and so on that have , special meaning to the corpora ? i really liked that . postdoc h: , realized there 's another category of interesting things which is that , i found this discussion very , i this question of how you get at queries really interesting . and and the and i and the fact that it 's , nebulous , what that what query it would be because it depends on what your purpose is . so i actually found that whole process of trying to think of what that would involve to be interesting . but that 's not really a specific fact . thought we went around a discussion of the factors involved there , which was worthwhile . phd e: i had a real revelation about taking pictures . 
i why i did n't do this before and i regret it . so that was very interesting for me . phd e: not that i the boards are n't really related to this meeting . , i will take pictures of them , phd f: i ' m gon na pass because i ca n't , of the jane took my answer . phd f: , so i ' m gon na pass for the moment but y come back to me . professor a: i think " pass " is socially acceptable . but i will say , i will actually , a spin on different slightly different spin on what you said , this issue of , realizing that we could take minutes , and that actually may be a goal . so that may be the test in a sense , test data , the template of what we want to test against , generating a summary . so that 's an interesting new twist on what we can do with this data . professor c: i agree with jane and eric . the question of how to generate queries automatically was the most interesting question that came up , and it 's something that , as you said , is a whole research topic in itself , so i do n't think we 'll be able to do anything on it because we do n't have funding on it , in this project . but , it 's definitely something i would want to do something on . professor d: , being more management lately than research , the thing that impressed me most was the people dynamics and not any of the facts . that is , i really enjoyed hanging out with this group of people today . so that 's what really impressed me . grad g: that actually has come up a couple times in queries . i was talking to landay and that was one of his examples . when when did people laugh ? postdoc h: h do we need do i need to turn something off here , or i do unplug this , or ? ###summary: the discussion concerned mainly ideas about data collection and the nature and generation of queries on meetings. meeting notes taken by participants as standard minutes or summaries , or on devices like crosspads can provide useful information. there is also interest in the speech community for fusion of speech with visual data. taking some photos of the whiteboard and the positioning of participants is easy enough to do. another option would be the recording by participants of short oral summaries of the meeting. summaries could be used to bootstrap for queries , the exact nature of which remained nebulous. candidate types are keyword searches , action items , elaboration on points of interest , and agreement between participants. an initial prototype system to test any hypotheses can be pipelined. the recorded data will be stored on cd-rom's and sent to ibm for transcription. there is also work being done on the annotation of prosody. the corpus could be enriched with found data ( public or collected by other projects ) , if those prove appropriate for use in the project. finally , project web pages and mailing list are being set up and uw are going to investigate the suitability of their recording equipment. within the piloting of data collection ideas , it was decided that crosspads are going to be used for detection of "hot points" during the meeting. other ideas to be tested are the use of summaries or minutes -if a group normally produce them- and the recording of oral summaries by individual participants after the meeting. photographing the contents of the board and the positions of the meeting participants will provide extra information. for query generation purposes , all participants will also be asked for their highlight of the meeting. 
the general goal regarding the corpus is to investigate the acquisition of further appropriate data through public sources or available collections of other institutions. as to recordings at icsi , the group agreed that imposing rules of participation in order to avoid speaker overlaps was not desirable. instead , they will aim for collecting stylistically varied data ( different group dynamics and types of meeting ). further action will also be taken to close other pending issues: the web pages will be organised , the recording room will be finalised and uw will also test their recording infrastructure. the recording of meetings and any possible additional tasks must be set up in a user-friendly way , otherwise it would be difficult to recruit volunteers. asking participants to do more than they normally would in a meeting could put people off. the video-recording of meetings , apart from adding an extra level of instrumentation complexity , can also make people apprehensive. the usability and usefulness of crosspads is not certain. acquiring data from other sources will not be straightforward , as they may either not be suitable for this project or not publicly available. querying is also a major issue: what users would ask from a system is not clear. how this system would resolve high-level queries ( e.g. regarding agreement between participants ) is also hard to tell at this stage. as transcription has not started yet , there was concern as to how ibm will deal with multi-channel data. the abundance of speaker overlaps may also affect the quality of the transcription. however , it was accepted that some problems with the transcription of jargon are , to an extent , unavoidable. five hours of recorded pilot data are already available. ibm , who will be carrying out the transcribing , have been emailed the urls for the online data set-up and for the transcribing tool. some work has already been done with the annotation of speech features like prosody.
professor f: so the what w we h have been doing i they would like us all to read these digits . but we do n't all read them but a couple people read them . grad b: ok and the way you do it is you just read the numbers not as each single , so just like i do it . professor f: ok . let 's be done with this . this is ami , who and this is tilman and ralf . professor f: hi . so we 're gon na try to finish by five so people who want to can go hear nancy chang 's talk , downstairs . and you guys are g giving talks on tomorrow and wednesday lunch times , professor f: right ? that 's great . ok so , do y do what we 're gon na do ? grad b: two things we 'll introduce ourselves and what we do . and we already talked with andreas , thilo and david and some lines of code were already written today and almost tested and just gon na say we have again the recognizer to parser thing where we 're working on and that should be no problem and then that can be developed as needed when we get enter the tourism domain . we have talked this morning with the with tilman about the generator . grad b: and there one of our diligent workers has to volunteer to look over tilman 's shoulder while he is changing the grammars to english because w we have we face two ways . either we do a syllable concatenating grammar for the english generation which is starting from scratch and doing it the easy way , or we simply adopt the more in - depth style that is implemented in the german system and are then able not only to produce strings but also the syntactic parse not the syntactic tree that is underneath in the syntactic structure which is the way we decided we were gon na go because a , it 's easier in the beginning and it does require some knowledge of those grammars and some ling linguistic background . but it should n't be a problem for anyone . professor f: ok so that sounds good . johno , are you gon na have some time t to do that w with these guys ? cuz y you 're the grammar maven . professor f: it makes sense , does n't it ? good . so , that 's probably the right way to do that . and an , so i actually wanna f to find out about it too , but i may not have time to get in . grad b: the ultimate goal is that before they leave we can run through the entire system input through output on at least one or two sample things . and and by virtue of doing that then in this case johno will have acquired the knowledge of how to extend it . ad infinitum . when needed , if needed , when wanted and . grad b: and also ralf has hooked up with david and you 're gon na continue either all through tonight or tomorrow on whatever to get the er parser interface working . they are thinning out and thickening out lattices and doing this to see what works best . professor f: ok , before you got put to work ? ok , so that 's one branch is to get us caught up on what 's going on . also it would be really to the plans are , in addition to what 's already in code . and we can d w was there a time when we were set up to do that ? it probably will work better if we do it later in the week , after we actually understand better what 's going on . so when do you guys leave ? professor f: , ok , so ok , so so anyt we 'll find a time later in the week to get together and talk about your understanding of what smartkom plans are . and how we can change them . grad b: should we already set a date for that ? might be beneficial while we 're all here . professor f: ok ? what what does not work for me is thursday afternoon . 
do earlier in the day on thursday , or most of the time on friday , not all . professor f: thilo . ok maybe we 'll see if david could make it . that would be good . grad b: ok so facing to what we ' ve been doing here for one thing we 're also using this room to collect data . not this type of data , grad b: no not meeting data but sort our version of a wizard experiment such not like the ones in munich but pretty close to it . the major difference to the munich ones is that we do it via the telephone even though all the recording is done here and so it 's a computer call system that gives you tourist information tells you how to get places . and it breaks halfway through the experiment and a human operator comes on . and part of that is trying to find out whether people change their linguistic verbal behavior when first thinking they speak to a machine and then to a human . and we 're setting it up so that we can we hope to implant certain intentions in people . we have first looked at a simple sentence that " how do i get to the powder - tower ? " ok so you have the castle of heidelberg and there is a tower and it 's called powder - tower . and so what will you parse out of that sentence ? probably something that we specified in m - three - l , that is " action go to whatever domain , object whatever powder - tower " . and maybe some model will tell us , some gps module , in the mobile scenario where the person is at the moment . and we ' ve gone through that once before in the deep map project and we noticed that first of all what are i should ' ve brought some slides , but what our so here 's the tower . think of this as a two - dimensional representation of the tower . and our system led people here , to a point where they were facing a wall in front of the tower . there is no entrance there , but it just happens to be the closest point of the road network to the geometric center because that 's how the algorithm works . so we took out that part of the road network as a hack and then it found actually the way to the entrance . which was now the closest point of the road network to ok , geometric center . but what we actually observed in heidelberg is that most people when they want to go there they actually do n't want to enter , because it 's not really interesting . they wanna go to a completely different point where they can look at it and take a picture . and so what a s you s let 's say a simple parse from a s from an utterance wo n't really give us is what the person actually wants . does he wanna go there to see it ? does he wanna go there now ? later ? how does the person wanna go there ? is that person more likely to want to walk there ? walk a scenic route ? and . there are all kinds of decisions that we have identified in terms of getting to places and in terms of finding information about things . and we are constructing and then we ' ve identified more or less the extra - linguistic parameters that may f play a role . information related to the user and information related to the situation . and we also want to look closely on the linguistic information that what we can get from the utterance . that 's part of why we implant these intentions in the data collection to see whether people actually phrase things differently whether they want to enter in order to buy something or whether they just wanna go there to look at it . and so the idea is to construct suitable interfaces and a belief - net for a module that actually tries to what the underlying intention was .
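a minimal sketch of the kind of intention - guessing module being described here , with invented feature names and made - up probability tables ( nothing below is from the actual smartkom or deep map code ) ; it just combines discrete linguistic and extra - linguistic evidence over candidate intentions in a naive - bayes style :

    # sketch : evidence -> posterior over underlying intentions .
    # all features , labels and numbers are invented for illustration .

    PRIOR = {"enter": 0.4, "view": 0.4, "picture": 0.2}

    # p(feature=value | intention) , hypothetical tables
    LIKELIHOOD = {
        ("utterance_has_enter_verb", True): {"enter": 0.7, "view": 0.2, "picture": 0.1},
        ("discussed_admission_fee", True): {"enter": 0.8, "view": 0.15, "picture": 0.05},
        ("just_bought_film", True): {"enter": 0.1, "view": 0.2, "picture": 0.7},
    }

    def intention_posterior(evidence):
        """evidence : dict feature -> bool ; returns a normalized posterior."""
        post = dict(PRIOR)
        for feature, value in evidence.items():
            table = LIKELIHOOD.get((feature, value))
            if table is None:
                continue  # evidence with no table is simply ignored here
            for intention in post:
                post[intention] *= table[intention]
        total = sum(post.values())
        return {i: p / total for i, p in post.items()}

    print(intention_posterior({"just_bought_film": True}))  # picture wins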
and then enrich or augment the m - three - l structures with what it thought what more it got out of that utterance . so if it can make a good suggestion , " hey ! " , " that person does n't wanna enter . that person just wants to take a picture , " cuz he just bought film , or " that person wants to enter because he discussed the admission fee before " . or " that person wants to enter because he wants to buy something and that you usually do inside of buildings " and . these these types of these bits of additional information are going to be embedded into the m - three - l structure in an subfield that we have reserved . and if the action planner does something with it , great . if not , then that 's also something that we ca n't really at least we want to offer the extra information . we do n't really we 're not too worried . t s ultimately if you have if you can offer that information , somebody 's gon na s do something with it sooner or later . that 's part of our belief . grad b: , right now i know the gis from email is not able to calculate these viewpoints . so that 's a functionality that does n't exist yet to do that dynamically , but if we can offer it that distinction , maybe somebody will go ahead and implement it . surely nobody 's gon na go ahead and implement it if it 's never gon na be used , so . what have i forgotten about ? , how we do it , professor f: no no . it 's a good time to pause . i s i see questions on peoples ' faces , so why do n't let 's let 's hear phd a: the obvious one would be if you envision this as a module within smartkom , where exactly would that sit ? that 's the d grad b: so far i ' ve thought of it as adding it onto the modeler knowledge module . grad b: but it could sit anywhere in the attention - recognition this is what attention - recognition literally can phd a: f from my understanding of what the people at phillips were originally trying to do does n't seem to quite fit into smartkom currently so what they 're really doing right now is only selecting among the alternatives , the hypotheses that they 're given enriched by the domain knowledge and the discourse modeler and so on . so if this is additional information that could be merged in by them . and then it would be available to action planning and others . professor f: let 's that w ok that was one question . is there other things that cuz we wanna not pa - pass over any , questions or concerns that you have . phd a: there 're two levels of giving an answer and i on both levels i do n't have any further questions . the two levels will be as far as i ' m concerned as standing here for the generation module and the other is my understanding of what smartkom is supposed to be professor f: so , let me let me s expand on that a little bit from the point of view of the generation . so the idea is that we ' ve actually got this all laid out an and we could show it to you ig robert did n't bring it today but there 's a belief - net which is there 's a first cut at a belief - net that does n't it is n't fully instantiated , and in particular some of the combination rules and ways of getting the conditional probabilities are n't there . but we believe that we have laid out the fundamental decisions in this little space and the things that influence them . so one of the decisions is what we call this ave thing . do you want to access , view or enter a thing . so that 's a discrete decision . 
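purely as illustration , an enriched m - three - l fragment of the kind described above might look roughly like this ; the element names are invented , only the idea of a reserved subfield carrying the inferred intention , and the access / view / enter distinction , come from the discussion :

    <!-- hypothetical shape only ; not the real m-three-l schema -->
    <intention>
      <action>goto</action>
      <object>powder-tower</object>
      <hypothesis source="knowledge-modeler">
        <mode confidence="0.8">view</mode>  <!-- access / view / enter -->
        <reason>user just bought film , so probably wants a picture</reason>
      </hypothesis>
    </intention>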
there are only three possibilities and the what one would like is for this , knowledge modeling module to add which of those it is and give it to the planner . but , th the current design suggests that if it seems to be an important decision and if the belief - net is equivocal so that it does n't say that one of these is much more probable than the other , then an option is to go back and ask for the information you want . alright ? now there are two ways one can go a imagine doing that . for the debugging we 'll probably just have a drop - down menu and the while you 're debugging you will just but for a full system , then one might very formulate a query , give it to the dialogue planner and say this , ar are you planning to enter ? or whatever it whatever that might be . so that 's under that model then , there would be a loop in which this thing would formulate a query , presumably give it to you . that would get expressed and then hopefully , you 'd get an answer back . and that would the answer would have to be parsed . right and ok so , th that , we probably wo n't do this early on , because the current focus is more on the decision making and like that . but while we 're on the subject wanted to give you a head 's up that it could be that some months from now we said " ok we 're now ready to try to close that loop " in terms of querying about some of these decisions . phd a: so my suggestion then is that you look into the currently ongoing discussion about how the action plans are supposed to look like . and they 're currently agreeing or in the process of agreeing on an x m l - ification of something like a state - transition network of how dialogues would proceed . and the these transition networks will be what the action planner interprets in a sense . phd a: marcus lerkult is actually implementing that and marcus and michael together are leading the discussion there , professor f: the transition diagrams . and it may be that we should early on make that they have the flexibility that we need . grad b: but they have i understood this right ? they they govern more or less the dialogue behavior or the action it 's not really what you do with the content of the dialogue but it 's so , there is this interf professor f: so there 's ac so there th the word " action " , ok , is what 's ambiguous here . so , one thing is there 's an actual planner that tells the person in the tourist domain now , per tells the person how to go , first go here , grad b: mm - . professor f: first go there , take a bus , whatever it is . so that 's that form of planning , and action , and a route planner and gis , all . but that is n't what you mean . phd a: no . no , in smartkom terminology that 's called a function that 's modeled by a function modeler . and it 's th that 's completely encapsulated from th the dialogue system . that 's simply a functionality that you give data as in a query and then you get back from that mmm , a functioning model which might be a planner or a vcr or whatever . some result and that 's then used . professor f: so that 's what . so action he action here means dia speech ac dialogue act . professor f: , tha it 's not going to that 's not going to be good enough . i don what i meant by that . so the idea of having a , transition diagram for the grammar of conversations is a good idea .
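a guess at the shape such an xml - ified state - transition network could take , including the loop back through a clarification query when the belief - net is equivocal ; all names here are invented , not the format markus and michael are actually agreeing on :

    <!-- invented sketch of an xml-ified dialogue transition network -->
    <dialogue-network initial="await-goal">
      <state name="await-goal">
        <transition on="goal-specified" to="check-mode"/>
      </state>
      <state name="check-mode">
        <!-- belief-net equivocal about access / view / enter :
             formulate a query and route it through generation -->
        <transition on="mode-equivocal" to="ask-mode"/>
        <transition on="mode-clear" to="plan-route"/>
      </state>
      <state name="ask-mode">
        <transition on="user-answered" to="check-mode"/>
      </state>
      <state name="plan-route">
        <transition on="route-delivered" to="await-goal"/>
      </state>
    </dialogue-network>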
and that we do hav definitely have to get in on it and find out ok . but that when so , when you get to the tourist domain it 's not just an information retrieval system . professor f: right ? so this i this is where this people are gon na have to think this through a bit more carefully . so , if it 's only like in the film and t v thing , ok , you can do this . and you just get information and give it to people . but what happens when you actually get them moving and so on , y your i d the notion of this as a self contained module th the functional module that interacts with where the tourism g is going probably is too restrictive . now how much people have thought ahead to the tourist domain in this phd a: probably not enough , an another more basic point there is that the current tasks and therefore th the concepts in this ac what 's called the action plan and what 's really the dialogue manager . is based on slots that have to be filled and the values in these slots would be fixed things like the a time or a movie title like this phd a: whereas in the a tourist domain it might be an entire route . set - based , or even very complex structured information in these slots and i ' m not if complex slots of that type are really being taken into consideration . so that 's really something we professor f: could you could you put a message into the right place to see if we can at least ask that question ? grad b: and it might actually also because again in deep map we have faced and implemented those problems once already maybe we can even shuffle some know how from there to markus and michael . and mmm you ok th i 'll talk to michael it 's what i do anyway . who how far is the m - three - l specification for the la natural language input gone on the i have n't seen anything for the tourist path domain . grad b: and you are probably also involved in that , together with the usual gang , petra and jan grad b: ok because that 's those are the true key issues is how does the whatever comes out of the language input pipeline look like and then what the action planner does with it and how that is specified . i did n't think of the internal working of the action planner and the language the function model as relevant . because what they take is this fixed representation of a of an intention . and that can be as detailed or as crude as you want it to be . but the internal workings of the whether there 're dialogue action planners that work with belief - nets that are action planners that work with state automata . so that should n't really matter too much . it does matter because it does have to keep track of you we are on part six of r a route that consists of eight steps and professor f: , th there are a lot of reasons why it matters . ok , so that , the i it 's the action planner is going to take some spec and s make some suggestions about what the user should do . what the user says after that is going to be very much caught up with what the action planner told it . if the parser and the language end does n't the person 's been told th it 's you 're making your life much more difficult than it has to be . so if someone says the best t to go there is by taxi , let 's say . now the planner comes out and says you wanna get there fast , take a taxi . and the language end does n't know that . ok , there 's all sorts of dialogues that wo n't make any sense which would be just fine . 
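the slot problem raised above ( fixed values like a time or a movie title versus an entire structured route ) can be made concrete in a few lines ; the type names are invented for illustration :

    # sketch : atomic slots versus the structured slots a tourist domain needs .
    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class RouteSegment:
        start: str
        end: str
        mode: str                 # "walk" , "bus" , "taxi" , ...

    @dataclass
    class Route:
        segments: List[RouteSegment]
        current: int = 0          # tracks "part six of a route of eight steps"

    SlotValue = Union[str, Route]  # a slot value is no longer just an atom

    slots = {
        "time": "19:30",          # the classic fixed-value case
        "route": Route([RouteSegment("castle", "powder-tower", "walk")]),
    }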
phd a: that would b but that point has been realized and it 's not really been defined yet but there 's gon na be some feedback and input from the action planner into all the analysis modules , telling them what to expect and what the current state of the discourse is . beyond what 's currently being implemented which is just word lists . professor f: , but this is not the st this is not just the state of the discourse . professor f: ok so it z and s , it 's great if people are already taking that into account . but one would have t have to see the details . phd a: the specifics are n't really there yet . so , there 's work to do there . professor f: so anyway , robert , that 's why i was thinking that you 're gon na need we talked about this several times that the input end is gon na need a fair amount of feedback from the planning end . in in one of these things which are much more continuous than the just the dialogue over movies and . phd a: and even on a more basic level the action planner actually needs to be able to have an expressive power that can deal with these structures . and not just say the dialogue will consist of ten possible states and th these states really are fixed in a certain sense . professor f: would there be any chance of getting the terminology changed so that the dialogue planner was called a " dialogue planner " ? because there 's this other thing the o there 's this other thing in the tourist domain which is gon na be a route planner phd a: it oughta be called a dialogue manager . cuz that 's what everybody else calls it . professor f: i would think , so , s so what would happen if we sent a note saying " gee we ' ve talked about this and could n't we change this th the whole word ? " i have no idea how complicated these things are . phd a: depends on who you talk to how . we 'll see . i 'll go check , i completely agree . , and this is just for historical reasons within , the preparation phase of the project and not because somebody actually believes it ought to be action planner . so if there is resistance against changing it , that 's just because " , we do n't want to change things . " that that not deep reason professor f: anyway . i if that c in persists then we 're gon na need another term . for the thing that actually does the planning of the routes and whatever we are doing for the tourist . professor f: , but that 's not g tha that ha has all the wrong connotations . it 's it sounds like it 's stand alone . it does n't interact , it does n't that 's why i ' m saying . you ca n't it 's fine for looking up when t when the show 's on tv . you go to th but i it 's really wrong headed for something that you that has a lot of state , it 's gon na interact co in a complicated way with the understanding parts . grad b: just the spatial planner and the route planner i showed you once the interac action between them among them in the deep map system so a printout of the communication between those two fills up i how many pages and that 's just part of how do i get to one place . it 's really insane . and but so this is definitely a good point to get michael into the discussion . or to enter his discussion , actually . phd a: , he 's he started january . and he 's gon na be responsible for the implementation of this action planner . dialogue manager . phd a: no , no he 's completely gon na rewrite everything . in java . ok so that 's interesting . grad b: yes i was just that 's my next question whether we 're gon na stick to prolog or not . 
grad b: but i do think the function modeling concept has a certain makes sense in a certain light because the action planner should not be or the dialogue manager in that case should not w have to worry about whether it 's interfacing with something that does route planning in this way or that way grad b: and it ca n't formulate its what it wants in a rather a abstract way , f " find me a good route for this . " it does n't really have to worry ab how route planner a or how route planner b actually wants it . so this is seemed like a good idea . in the beginning . professor f: it 's tricky . it 's tricky because one could imagine it will turn out to be the case that , this thing we 're talking about , th the extended n knowledge modeler will fill in some parameters about what the person wants . one could imagine that the next thing that 's trying to fill out the detailed , route planning , let 's say , will also have questions that it would like to ask the user . you could imagine you get to a point where it 's got a choice to make and it just does n't know something . and so y you would like it t also be able to formulate a query . and to run that back through . the dialogue manager and to the output module and back around . and a i a good design would allow that to happen . phd a: but that does n't necessarily contradict an architecture where there really is a pers a def - defined interface . and professor f: but but what it nee but th what the in that case the dialogue manager is event driven . so the dialogue manager may think it 's in a dialogue state of one sort , and this one of these planning modules comes along and says " hey , right now we need to ask a question " . so that forces the dialogue manager to change state . phd a: and and the underlying idea is that there is something like kernel modules with kernel functionality that you can plug certain applications like tourist information or the home scenario with controlling a vcr and so on . and then extend it to an arbitrary number of applications eventually . so would n't that 's an additional reason to have this - defined interface and keep these things like tourist information external . grad b: , there is another philosophical issue that you can evade but , at least it makes sense to me that sooner or later a service is gon na come and describe itself to you . and that 's what srini is working on in the daml project where you find a gis about that gives you information on berkeley , and it 's gon na be there and tell you what it can do and how it wants to do things . and so you can actually interface to such a system without ever having met it before and the function modeler and a self - description of the external service haggle it out and you can use the same language core , understanding core to interface with planner - a , planner - b , planner - c and . which is , utopian completely utopian at the moment , but slowly , getting into the realm of the contingent . but we are facing much more realistic problems . and language input , is crucial also when you do the deep understanding analysis that we envision . then , the , what is it poverty of the stimulus , yet the m the less we get of that the better . and so we 're thinking , how much syntactic analysis actually happens already in the parser . and whether one could interface to that potentially grad d: , are there currently is no syntactic analysis but in the next release there will be some .
unless professor f: s so y we looked at the e current pattern matching thing . and as you say it 's just a surface pattern matcher . , so what are the plans roughly ? grad d: it 's to integrate and syntactic analysis . and add some more features like segmentation . so then an utter more than one utterance is there there 's often pause between it and a segmentation occurs . professor f: so , the so the idea is to have a pa y a particular do you have a particular parser in mind ? is it partic d have you thought through ? is it an hpsg parser ? is it a whatever ? grad d: no no it 's complicated for it 's just one person and so i have to keep the professor f: but the people at d f people at dfki have written a fair number of parsers . other , people over the years . have written various parsers at dfki . none of them are suitable ? i d i ' m asking . i . grad d: , the problem is th that it has to be very fast because if you want to for more than one path anywhere what 's in the latches from the speech recognizer so it 's speed is crucial . and they are not fast enough . and they also have to be very robust . cuz of speech recognition errors and professor f: so , so there was a chunk parser in verbmobil , that was one of the branchers . they d th i c there were these various , competing syntax modules . and i know one of them was a chunk parser and i do n't remember who did that . phd a: i ca n't quite recall whether they actually produced the chunks in the first place . professor f: there w that 's right . they w they had there were this was done with a two phase thing , where the chunk parser itself was pretty stupid professor f: and then there was a trying to fit them together that h used more context . phd a: you s and especially you did some , l was a learning - based approach which learned from a big corpus of trees . phd a: and yes the it the chunk parser was a finite - state machine that mark light originally w worked on in while he was in tuebingen and then somebody else in tuebingen picked that up . so it was done in tuebingen , professor f: but is that the thing y it sounds like the thing that you were thinking of . grad b: the from michael strube , i ' ve heard very good about the chunk parser that is done by forwiss , which is in embassy doing the parsing . so this is came as a surprise to me that , embassy s is featuring a parser but it 's what i hear . one could also look at that and see whether there is some synergy possible . grad b: and they 're doing chunk parsing and it 's give you the names of the people who do it there . but . then there is more ways of parsing things . professor f: but given th the constraints , that you want it to be small and fast and , my is you 're probably into some chunk parsing . and i ' m not a big believer in this statistical , cleaning up it that seems to me a last resort if you ca n't do it any other way . but . it may i may be that 's what you guys finally decide do . and have you looked just again for context there is this one that they did at sri some years ago fastus ? a grad d: , i ' ve looked at it but it 's no not much information available . i found , professor f: it is . it 's it was pretty ambitious . and it was english oriented , grad d: , and purely finite - state transducers are not so good for german since there 's professor f: , i that 's all the morphology and . and english is all th all word order . and it makes a lot more sense . and e , ok . good point . so in german you ' ve got most of this done with grad d: - . 
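as a rough illustration of the finite - state chunk parsing discussed above , a first pass can be regular patterns over part - of - speech tags ; the tagset and patterns are invented , and this is a toy , not any of the verbmobil , embassy or forwiss parsers :

    # toy finite - state chunker : regular patterns over pos tags .
    # real systems compile the patterns into one automaton for speed .
    import re

    CHUNK_PATTERNS = [
        ("NP", re.compile(r"(DT )?(JJ )*(NN ?)+")),
        ("PP", re.compile(r"IN (DT )?(NN ?)+")),
    ]

    def chunk(tagged):
        """tagged : list of (word , pos) pairs ; returns (label , words) chunks."""
        chunks, i = [], 0
        while i < len(tagged):
            tail = " ".join(pos for _, pos in tagged[i:])
            for label, pattern in CHUNK_PATTERNS:
                match = pattern.match(tail)
                if match:
                    n = len(match.group(0).split())
                    chunks.append((label, [w for w, _ in tagged[i:i + n]]))
                    i += n
                    break
            else:
                i += 1  # nothing matched here ; skip the token (robustness)
        return chunks

    print(chunk([("the", "DT"), ("powder", "NN"), ("tower", "NN")]))
    # -> [('NP', ['the', 'powder', 'tower'])]

the attraction , as noted above , is speed and robustness against recognition errors : unknown material is simply skipped rather than failing the whole parse .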
also it 's yes , the choice between this processing and that processing and my template matcher . professor f: so what about did y like morfix ? a e y you ' ve got stemmers ? or is that something that professor f: so y you just connect to the lexicon at least for german you have all of the stemming information . grad d: , we can , we have knowledge bases from verbmobil system we can use and so . professor f: but it does n't look like i you 're using it . i did n't n see it being used in the current template parser . i did n't see any we l actually only looked at the english . phd a: but what what 's happening on - line is just a retrieval from the lexicon which would give all the stemming information so it would be a full foreign lexicon . professor f: so it , s i 'd so in german then you actually do case matching and things like in the pattern matcher or not ? professor f: cuz i r i did n't reme i did n't think i saw it . have we looked at the german ? , i haven that 's getting it from the lexicon is just fine . professor f: no problem with that . and here 's the case where the english and the german might really be significantly different . in terms of if you 're trying to build some fast parser and you really might wanna do it in a significantly different way . so you ' ve you guys have looked at this ? also ? in terms of , w if you 're doing this for english as german do you think now that it would be this doing it similarly ? grad d: it 's yes , it 's possible to do list processing . and maybe this is more adequate for english and in german set processing is used . grad b: there 's m i ' m there 's gon na be more discussion on that after your talk . professor f: now actually , are you guys free at five ? or do you have to go somewhere at five o ' clock tonight ? w in ten minutes ? professor f: that 's good , because that will tell you a fair amount about the form of semantic construction grammar that we 're using . so so i th that probably as good an introduction as you 'll get . to the form of conceptual grammar that w we have in mind for this . it wo n't talk particularly about how that relates to what robert was saying at the beginning . but let me give you a very short version of this . so we talked about the fact that there 're going to be a certain number of decisions that you want the knowledge modeler to make , that will be then fed to the function module , that does , route planning . it 's called the " route planner " . so there are these decisions . and then one half of this we talked about at little bit is how if you had the right information , if you knew something about what was said and about th the something about was the agent a tourist or a native or a business person or young or old , whatever . that information , and also about the , what we 're calling " the entity " , is it a castle , is it a bank ? is it a s town square , is it a statue ? whatever . so all that information could be combined into decision networks and give you decisions . but the other half of the problem is how would you get that information from the parsed input ? so , so what you might try to do is just build more templates , saying we 're trying to build a templ build a template that w somehow would capture the fact that he wants to take a picture . and and we could you could do this . and it 's a small enough domain that probably you , you could do this . 
but from our point of view this is also a research project and there are a couple of people not here for various reasons who are doing doctoral dissertations on this , and the idea that we 're really after is a very deep semantics based on cognitive linguistics and the notion that there are a relatively small number of primitive conceptual schemas that characterize a lot of activity . so a typical one in this formulation is a container . so this is a static thing . and the notion is that all sorts of physical situations are characterized in terms of containers . going in and out the portals and con but also , importantly for lakoff and these guys is all sorts of metaphorical things are also characterized this way . you get in trouble and et cetera and so s so , what we 're really trying to do is to map from the discourse to the conceptual semantics level . and from there to the appropriate decisions . so another one of these primitive , what are called " image schemas " , is goal seeking . so this a notion of a source , path , goal , trajector , possibly obstacles . and the idea is this is another conceptual primitive . and that all sorts of things , particularly in the tourist domain , can be represented in terms of source , path and goal . so the idea would be could we build an analyser that would take an utterance and say " aha ! th this utterance is talking about an attempt to reach a goal . the goal is this , the pers the , traveller is that , the sor w where we are at now is this , they ' ve mentioned possible obstacles , et cetera . " so th the and this is an again attempt to get very wide coverage . so if you can do this , then the notion would be that across a very large range of domains , you could use this deep conceptual basis as the interface . and then , the processing of that , both on the input end , recognizing that certain words in a language talk about containers or goals , et cetera , and on the output end , given this information , you can then make decisions about what actions to take . provides , they claim , a very powerful , general notion of deep semantics . so that 's what we 're really doing . and nancy is going to her talk is going to be not about using this in applications , but about modeling how children might learn this deep semantic grammar . phd a: and how do you envision the this deep semantic to be worked with . would it be highly ambiguous if and then there would be another module that takes that highly underspecified deep semantic construction and map it onto the current context to find out what the person really was talking about in that context . or a professor f: that 's where the belief - net comes in . so th the idea is , let 's take this business about going to the powder - tower . so part of what you 'll get out of this will be the fact tha w if it works right , ok , that this is an agent that wants to go to this place and that 's their goal and there will be additional situational information . professor f: part of it comes from the ontology . the tower is this object . part of it comes from the user model . and the idea of the belief - net is it combines the information from the dialogue which comes across in this general way , this is a goal seeking behavior , along with specific information from the ontology about the kinds of objects involved professor f: and about the situation about " is it raining ? " whatever it is . and so that 's the belief - net that we ' ve laid out . 
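going back to the source - path - goal schema described above , a toy rendering of it as a feature structure , with an invented trigger list standing in for a real construction analyzer :

    # toy source - path - goal schema ; triggers are a stand - in for
    # a real construction analyzer .
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SourcePathGoal:
        trajector: str                # who is moving
        goal: str                     # where they want to end up
        source: Optional[str] = None  # where they start , if known
        obstacles: tuple = ()

    GOAL_TRIGGERS = ("get to", "go to", "reach")

    def analyze(utterance, speaker="user", location=None):
        """very rough : detect a goal - seeking schema in an utterance."""
        for trigger in GOAL_TRIGGERS:
            if trigger in utterance:
                goal = utterance.split(trigger, 1)[1].strip(" ?.")
                return SourcePathGoal(trajector=speaker, goal=goal, source=location)
        return None

    print(analyze("how do i get to the powder - tower ?", location="castle"))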
and so th the coupling to the situation comes in this model from , at th at the belief - net , combining evidence from the dialogue with the ontology with the situation . but nancy is n't gon na talk about that ,
the berkeley even deeper understanding group discussed plans and concerns regarding the architecture of smartkom , its proposed modules , and the types of interactions expected to take place between modules. the meeting was largely focused on smartkom's decision making capacity and how to adapt this functionality to the tourist information domain. the group set a date for assessing smartkom plans. it was decided that smartkom's action plans should be represented in xml as a state transition network. it was proposed that the term 'action planner' should be replaced by 'dialogue planner' or 'dialogue manager'. prolog will be phased out completely and replaced by java code. the dialogue manager must be capable of changing states , i.e. go from being event driven to answering a question from a planning module. smartkom should feature a well defined core interface , with domain-specific information kept external. a syntactic analysis component that performs chunk parsing will be added to the system. as a functional module , the action planner is too restrictive for the tourist domain and requires complex slots from the dialogue manager. what form will the language input have , and what will the action planner do with it? links must be in place between the input end , action planner , parser , and language feedback components for communicating the current state of the plan. interactions in a deep map system between the spatial planner and the route planner are too convoluted. smartkom requires a fast and robust parser that includes language-specific extensions. which form of semantic construction grammar should be used , and how would such information be derived from the parsed input? efforts are in progress to complete and test the code , generate an english grammar like that used in the german system , and get the parser interface working. a 'wizard of oz' style data collection experiment is in progress to model users' underlying intentions when communicating with the dialogue component of a tourist domain gps. corresponding interfaces and a belief net will be incorporated into the knowledge modelling module. while the current focus is on decision making , a fuller implementation of smartkom will enable it to query the user for desired information. stemming information is connected to the lexicon through the use of knowledge bases from verbmobil.
###dialogue: professor f: so the what w we h have been doing i they would like us all to read these digits . but we do n't all read them but a couple people read them . grad b: ok and the way you do it is you just read the numbers not as each single , so just like i do it . professor f: ok . let 's be done with this . this is ami , who and this is tilman and ralf . professor f: hi . so we 're gon na try to finish by five so people who want to can go hear nancy chang 's talk , downstairs . and you guys are g giving talks on tomorrow and wednesday lunch times , professor f: right ? that 's great . ok so , do y do what we 're gon na do ? grad b: two things we 'll introduce ourselves and what we do . and we already talked with andreas , thilo and david and some lines of code were already written today and almost tested and just gon na say we have again the recognizer to parser thing where we 're working on and that should be no problem and then that can be developed as needed when we get enter the tourism domain . we have talked this morning with the with tilman about the generator . grad b: and there one of our diligent workers has to volunteer to look over tilman 's shoulder while he is changing the grammars to english because w we have we face two ways . either we do a syllable concatenating grammar for the english generation which is starting from scratch and doing it the easy way , or we simply adopt the more in - depth style that is implemented in the german system and are then able not only to produce strings but also the syntactic parse not the syntactic tree that is underneath in the syntactic structure which is the way we decided we were gon na go because a , it 's easier in the beginning and it does require some knowledge of those grammars and some ling linguistic background . but it should n't be a problem for anyone . professor f: ok so that sounds good . johno , are you gon na have some time t to do that w with these guys ? cuz y you 're the grammar maven . professor f: it makes sense , does n't it ? good . so , that 's probably the right way to do that . and an , so i actually wanna f to find out about it too , but i may not have time to get in . grad b: the ultimate goal is that before they leave we can run through the entire system input through output on at least one or two sample things . and and by virtue of doing that then in this case johno will have acquired the knowledge of how to extend it . ad infinitum . when needed , if needed , when wanted and . grad b: and also ralf has hooked up with david and you 're gon na continue either all through tonight or tomorrow on whatever to get the er parser interface working . they are thinning out and thickening out lattices and doing this to see what works best . professor f: ok , before you got put to work ? ok , so that 's one branch is to get us caught up on what 's going on . also it would be really to the plans are , in addition to what 's already in code . and we can d w was there a time when we were set up to do that ? it probably will work better if we do it later in the week , after we actually understand better what 's going on . so when do you guys leave ? professor f: , ok , so ok , so so anyt we 'll find a time later in the week to get together and talk about your understanding of what smartkom plans are . and how we can change them . grad b: should we already set a date for that ? might be beneficial while we 're all here . professor f: ok ? what what does not work for me is thursday afternoon . 
do earlier in the day on thursday , or most of the time on friday , not all . professor f: thilo . ok maybe we 'll see if david could make it . that would be good . grad b: ok so facing to what we ' ve been doing here for one thing we 're also using this room to collect data . not this type of data , grad b: no not meeting data but sort our version of a wizard experiment such not like the ones in munich but pretty close to it . the major difference to the munich ones is that we do it via the telephone even though all the recording is done here and so it 's a computer call system that gives you tourist information tells you how to get places . and it breaks halfway through the experiment and a human operator comes on . and part of that is trying to find out whether people change their linguistic verbal behavior when first thinking they speak to a machine and then to a human . and we 're setting it up so that we can we hope to implant certain intentions in people . we have first looked at a simple sentence that " how do i get to the powder - tower ? " ok so you have the castle of heidelberg and there is a tower and it 's called powder - tower . and so what will you parse out of that sentence ? probably something that we specified in m - three - l , that is @ " action go to whatever domain , object whatever powder - tower " . and maybe some model will tell us , some gps module , in the mobile scenario where the person is at the moment . and we ' ve gone through that once before in the deep mail project and we noticed that first of all what are i should ' ve brought some slides , but what our so here 's the tower . think of this as a two - dimensional representation of the tower . and our system led people here , to a point where they were facing a wall in front of the tower . there is no entrance there , but it just happens to be the closest point of the road network to the geometric center because that 's how the algorithm works . so we took out that part of the road network as a hack and then it found actually the way to the entrance . which was now the closest point of the road network to ok , geometric center . but what we actually observed in heidelberg is that most people when they want to go there they actually do n't want to enter , because it 's not really interesting . they wanna go to a completely different point where they can look at it and take a picture . and so what a s you s let 's say a simple parse from a s from an utterance wo n't really give us is what the person actually wants . does he wanna go there to see it ? does he wanna go there now ? later ? how does the person wanna go there ? is that person more likely to want to walk there ? walk a scenic route ? and . there are all kinds of decisions that we have identified in terms of getting to places and in terms of finding information about things . and we are constructing and then we ' ve identified more or less the extra - linguistic parameters that may f play a role . information related to the user and information related to the situation . and we also want to look closely on the linguistic information that what we can get from the utterance . that 's part of why we implant these intentions in the data collection to see whether people actually phrase things differently whether they want to enter in order to buy something or whether they just wanna go there to look at it . and so the idea is to construct suitable interfaces and a belief - net for a module that actually tries to what the underlying intention was . 
and then enrich or augment the m - three - l structures with what it thought what more it got out of that utterance . so if it can make a good suggestion , " hey ! " , " that person does n't wanna enter . that person just wants to take a picture , " cuz he just bought film , or " that person wants to enter because he discussed the admission fee before " . or " that person wants to enter because he wants to buy something and that you usually do inside of buildings " and . these these types of these bits of additional information are going to be embedded into the m - three - l structure in an subfield that we have reserved . and if the action planner does something with it , great . if not , then that 's also something that we ca n't really at least we want to offer the extra information . we do n't really we 're not too worried . t s ultimately if you have if you can offer that information , somebody 's gon na s do something with it sooner or later . that 's part of our belief . grad b: , right now i know the gis from email is not able to calculate these viewpoints . so that 's a functionality that does n't exist yet to do that dynamically , but if we can offer it that distinction , maybe somebody will go ahead and implement it . surely nobody 's gon na go ahead and implement it if it 's never gon na be used , so . what have i forgotten about ? , how we do it , professor f: no no . it 's a good time to pause . i s i see questions on peoples ' faces , so why do n't let 's let 's hear phd a: the obvious one would be if you envision this as a module within smartkom , where exactly would that sit ? that 's the d grad b: so far i ' ve thought of it as adding it onto the modeler knowledge module . grad b: but it could sit anywhere in the attention - recognition this is what attention - recognition literally can phd a: f from my understanding of what the people at phillips were originally trying to do does n't seem to quite fit into smartkom currently so what they 're really doing right now is only selecting among the alternatives , the hypotheses that they 're given enriched by the domain knowledge and the discourse modeler and so on . so if this is additional information that could be merged in by them . and then it would be available to action planning and others . professor f: let 's that w ok that was one question . is there other things that cuz we wanna not pa - pass over any , questions or concerns that you have . phd a: there 're two levels of giving an answer and i on both levels i do n't have any further questions . the two levels will be as far as i ' m concerned as standing here for the generation module and the other is my understanding of what smartkom is supposed to be professor f: so , let me let me s expand on that a little bit from the point of view of the generation . so the idea is that we ' ve actually got this all laid out an and we could show it to you ig robert did n't bring it today but there 's a belief - net which is there 's a first cut at a belief - net that does n't it is n't fully instantiated , and in particular some of the combination rules and ways of getting the conditional probabilities are n't there . but we believe that we have laid out the fundamental decisions in this little space and the things that influence them . so one of the decisions is what we call this ave thing . do you want to access , view or enter a thing . so that 's a discrete decision . 
there are only three possibilities and the what one would like is for this , knowledge modeling module to add which of those it is and give it to the planner . but , th the current design suggests that if it seems to be an important decision and if the belief - net is equivocal so that it does n't say that one of these is much more probable than the other , then an option is to go back and ask for the information you want . alright ? now there are two ways one can go a imagine doing that . for the debugging we 'll probably just have a drop - down menu and the while you 're debugging you will just but for a full system , then one might very formulate a query , give it to the dialogue planner and say this , ar are you planning to enter ? or whatever it whatever that might be . so that 's under that model then , there would be a loop in which this thing would formulate a query , presumably give it to you . that would get expressed and then hopefully , you 'd get an answer back . and that would the answer would have to be parsed . right and ok so , th that , we probably wo n't do this early on , because the current focus is more on the decision making and like that . but while we 're on the subject wanted to give you a head 's up that it could be that some months from now we said " ok we 're now ready to try to close that loop " in terms of querying about some of these decisions . phd a: so my suggestion then is that you look into the currently ongoing discussion about how the action plans are supposed to look like . and they 're currently agreeing or in the process of agreeing on an x m l - ification of something like a state - transition network of how dialogues would proceed . and the these transition networks will be what the action planner interprets in a sense . phd a: marcus lerkult is actually implementing that and marcus and michael together are leading the discussion there , professor f: the transition diagrams . and it may be that we should early on make that they have the flexibility that we need . grad b: but they have i understood this right ? they they govern more or less the dialogue behavior or the action it 's not really what you do with the content of the dialogue but it 's so , there is this interf professor f: so there 's ac so there th the word " action " , ok , is what 's ambiguous here . so , one thing is there 's an actual planner that tells the person in the tourist domain now , per tells the person how to go , first go here , bed009ddialogueact394 128617 128657 d grad b -1 0 mm - . bed009fdialogueact393 128579 128658 f professor s^rt -1 0 first go there bed009fdialogueact395 128718 128839 f professor s^rt -1 0 uh , take a bus , whatever it is . so that 's that form of planning , and action , and a route planner and gis , all . but that is n't what you mean . phd a: no . no , in smartkom terminology that 's called a function that 's modeled by a function modeler . and it 's th that 's completely encapsulated from th the dialogue system . that 's simply a functionality that you give data as in a query and then you get back from that mmm , a functioning model which might be a planner or a vcr or whatever . some result and that 's then used . professor f: so that 's what . so action he action here means dia speech ac dialogue act . professor f: , tha it 's not going to that 's not going to be good enough . i don what i meant by that . so the idea of having a , transition diagram for the grammar of conversations is a good idea . 
and that we do hav definitely have to get in on it and find out ok . but that when so , when you get to the tourist domain it 's not just an information retrieval system . professor f: right ? so this i this is where this people are gon na have to think this through a bit more carefully . so , if it 's only like in the film and t v thing , ok , you can do this . and you just get information and give it to people . but what happens when you actually get them moving and so on , y your i d the notion of this as a self contained module th the functional module that interacts with where the tourism g is going probably is too restrictive . now how much people have thought ahead to the tourist domain in this phd a: probably not enough , an another more basic point there is that the current tasks and therefore th the concepts in this ac what 's called the action plan and what 's really the dialogue manager . is based on slots that have to be filled and the values in these slots would be fixed things like the a time or a movie title like this phd a: whereas in the a tourist domain it might be an entire route . set - based , or even very complex structured information in these slots and i ' m not if complex slots of that type are really being taken into consideration . so that 's really something we professor f: could you could you put a message into the right place to see if we can at least ask that question ? grad b: and it might actually also because again in deep map we have faced and implemented those problems once already maybe we can even shuffle some know how from there to markus and michael . and mmm you ok th i 'll talk to michael it 's what i do anyway . who how far is the m - three - l specification for the la natural language input gone on the i have n't seen anything for the tourist path domain . grad b: and you are probably also involved in that , together with the usual gang , petra and jan grad b: ok because that 's those are the true key issues is how does the whatever comes out of the language input pipeline look like and then what the action planner does with it and how that is specified . i did n't think of the internal working of the action planner and the language the function model as relevant . because what they take is this fixed representation of a of an intention . and that can be as detailed or as crude as you want it to be . but the internal workings of the whether there 're dialogue action planners that work with belief - nets that are action planners that work with state automata . so that should n't really matter too much . it does matter because it does have to keep track of you we are on part six of r a route that consists of eight steps and professor f: , th there are a lot of reasons why it matters . ok , so that , the i it 's the action planner is going to take some spec and s make some suggestions about what the user should do . what the user says after that is going to be very much caught up with what the action planner told it . if the parser and the language end does n't the person 's been told th it 's you 're making your life much more difficult than it has to be . so if someone says the best t to go there is by taxi , let 's say . now the planner comes out and says you wanna get there fast , take a taxi . and the language end does n't know that . ok , there 's all sorts of dialogues that wo n't make any sense which would be just fine . 
phd a: that would b but that point has been realized and it 's not really been defined yet but there 's gon na be some feedback and input from the action planner into all the analysis modules , telling them what to expect and what the current state of the discourse is . beyond what 's currently being implemented which is just word lists . professor f: , but this is not the st this is not just the state of the discourse . professor f: ok so it z and s , it 's great if people are already taking that into account . but one would have t have to see the details . phd a: the specifics are n't really there yet . so , there 's work to do there . professor f: so anyway , robert , that 's why i was thinking that you 're gon na need we talked about this several times that the input end is gon na need a fair amount of feedback from the planning end . in in one of these things which are much more continuous than the just the dialogue over movies and . phd a: and even on a more basic level the action planner actually needs to be able to have an expressive power that can deal with these structures . and not just say the dialogue will consist of ten possible states and th these states really are fixed in a certain sense . professor f: would there be any chance of getting the terminology changed so that the dialogue planner was called a " dialogue planner " ? because there 's this other thing the o there 's this other thing in the tourist domain which is gon na be a route planner phd a: it oughta be called a dialogue manager . cuz that 's what everybody else calls it . professor f: i would think , so , s so what would happen if we sent a note saying " gee we ' ve talked about this and could n't we change this th the whole word ? " i have no idea how complicated these things are . phd a: depends on who you talk to how . we 'll see . i 'll go check , i completely agree . , and this is just for historical reasons within , the preparation phase of the project and not because somebody actually believes it ought to be action planner . so if there is resistance against changing it , that 's just because " , we do n't want to change things . " that that not deep reason professor f: anyway . i if that c in persists then we 're gon na need another term . for the thing that actually does the planning of the routes and whatever we are doing for the tourist . professor f: , but that 's not g tha that ha has all the wrong connotations . it 's it sounds like it 's stand alone . it does n't interact , it does n't that 's why i ' m saying . you ca n't it 's fine for looking up when t when the show 's on tv . you go to th but i it 's really wrong headed for something that you that has a lot of state , it 's gon na interact co in a complicated way with the understanding parts . grad b: just the spatial planner and the route planner i showed you once the interac action between them among them in the deep map system so a printout of the communication between those two fills up i how many pages and that 's just part of how do i get to one place . it 's really insane . and but so this is definitely a good point to get michael into the discussion . or to enter his discussion , actually . phd a: , he 's he started january . and he 's gon na be responsible for the implementation of this action planner . dialogue manager . phd a: no , no he 's completely gon na rewrite everything . in java . ok so that 's interesting . grad b: yes i was just that 's my next question whether we 're gon na stick to prolog or not . 
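The planner-to-analysis feedback described above — currently just word lists — might, in a richer form, look something like this toy rescoring step. The structure and field names are invented for illustration; the real SmartKom interface was not yet specified:

```python
# Toy version of planner-to-analysis feedback: after proposing an
# action, the dialogue manager publishes what it expects next, so
# the analysis modules can bias toward it.  All names are invented.
expectations = {"state": "awaiting-transport-choice",
                "expected_concepts": ["taxi", "bus", "walk"]}

def rescore(hypotheses, expectations, bonus=0.1):
    # hypotheses: list of (text, score); boost those matching an
    # expected concept, then re-rank.
    out = []
    for text, score in hypotheses:
        if any(c in text for c in expectations["expected_concepts"]):
            score += bonus
        out.append((text, score))
    return sorted(out, key=lambda pair: -pair[1])

print(rescore([("a taxi please", 0.6), ("the powder tower", 0.65)],
              expectations))   # the taxi reading now wins
```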
grad b: but i do think the function modeling concept has a certain makes sense in a certain light because the action planner should not be or the dialogue manager in that case should not w have to worry about whether it 's interfacing with something that does route planning in this way or that way grad b: and it ca nt formulate its what it wants in a rather a abstract way , f " find me a good route for this . " it does n't really have to worry ab how route planner a or how route planner b actually wants it . so this is seemed like a good idea . in the beginning . professor f: it 's tricky . it 's tricky because one could imagine it will turn out to be the case that , this thing we 're talking about , th the extended n knowledge modeler will fill in some parameters about what the person wants . one could imagine that the next thing that 's trying to fill out the detailed , route planning , let 's say , will also have questions that it would like to ask the user . you could imagine you get to a point where it 's got a choice to make and it just does n't know something . and so y you would like it t also be able to formulate a query . and to run that back through . the dialogue manager and to the output module and back around . and a i a good design would allow that to happen . phd a: but that does n't necessarily contradict an architecture where there really is a pers a def - defined interface . and professor f: but but what it nee but th what the in that case the dialogue manager is event driven . so the dialogue manager may think it 's in a dialogue state of one sort , and this one of these planning modules comes along and says " hey , right now we need to ask a question " . so that forces the dialogue manager to change state . phd a: and and the underlying idea is that there is something like kernel modules with kernel functionality that you can plug certain applications like tourist information or the home scenario with controlling a vcr and so on . and then extend it to an arbitrary number of applications eventually . so would n't that 's an additional reason to have this - defined interface and keep these things like tourist information external . grad b: , there is another philosophical issue that you can evade but , at least it makes sense to me that sooner or later a service is gon na come and describe itself to you . and that 's what srini is working on in the daml project where you find a gis about that gives you information on berkeley , and it 's gon na be there and tell you what it can do and how it wants to do things . and so you can actually interface to such a system without ever having met it before and the function modeler and a self - description of the external service haggle it out and you can use the same language core , understanding core to interface with planner - a , planner - b , planner - c and . which is , utopian completely utopian at the moment , but slowly , getting into the realm of the contingent . but we are facing much more realistic problems . and language input , is crucial also when you do the deep understanding analysis that we envision . then , the , what is it poverty of the stimulus , yet the m the less we get of that the better . and so we 're thinking , how much syntactic analysis actually happens already in the parser . and whether one could interface to that potentially grad d: , are there currently is no syntactic analysis but in the next release there will be some . 
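The point that the dialogue manager must be event driven — a planning module can interrupt with a question at any time, forcing a state change — might look roughly like this. A minimal single-threaded sketch; the queue discipline and event names are assumptions, not the SmartKom design:

```python
# Minimal sketch of an event-driven dialogue manager: planner modules
# push a query event that forces a state change, rather than the
# manager marching through a fixed script.  Names are illustrative.
import queue

events = queue.Queue()

def route_planner_step():
    # A planning module hits a choice it cannot make on its own
    # and asks the dialogue manager to query the user.
    events.put(("planner-query",
                "Do you want the fastest route or the nicest one?"))

def dialogue_manager():
    state = "idle"
    while True:
        kind, payload = events.get()
        if kind == "planner-query":
            state = "asking-user"          # forced state change
            print("SYSTEM:", payload)
        elif kind == "user-answer":
            state = "idle"
            print("forwarding answer to planner:", payload)
        elif kind == "shutdown":
            break

route_planner_step()
events.put(("user-answer", "the fastest one"))
events.put(("shutdown", None))
dialogue_manager()
```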
unless professor f: s so y we looked at the e current pattern matching thing . and as you say it 's just a surface pattern matcher . , so what are the plans roughly ? grad d: it 's to integrate and syntactic analysis . and add some more features like segmentation . so then an utter more than one utterance is there there 's often pause between it and a segmentation occurs . professor f: so , the so the idea is to have a pa y a particular do you have a particular parser in mind ? is it partic d have you thought through ? is it an hpsg parser ? is it a whatever ? grad d: no no it 's complicated for it 's just one person and so i have to keep the professor f: but the people at d f people at dfki have written a fair number of parsers . other , people over the years . have written various parsers at dfki . none of them are suitable ? i d i ' m asking . i . grad d: , the problem is th that it has to be very fast because if you want to for more than one path anywhere what 's in the latches from the speech recognizer so it 's speed is crucial . and they are not fast enough . and they also have to be very robust . cuz of speech recognition errors and professor f: so , so there was a chunk parser in verbmobil , that was one of the branchers . they d th i c there were these various , competing syntax modules . and i know one of them was a chunk parser and i do n't remember who did that . phd a: i ca n't quite recall whether they actually produced the chunks in the first place . professor f: there w that 's right . they w they had there were this was done with a two phase thing , where the chunk parser itself was pretty stupid professor f: and then there was a trying to fit them together that h used more context . phd a: you s and especially you did some , l was a learning - based approach which learned from a big corpus of trees . phd a: and yes the it the chunk parser was a finite - state machine that mark light originally w worked on in while he was in tuebingen and then somebody else in tuebingen picked that up . so it was done in tuebingen , professor f: but is that the thing y it sounds like the thing that you were thinking of . grad b: the from michael strube , i ' ve heard very good about the chunk parser that is done by forwiss , which is in embassy doing the parsing . so this is came as a surprise to me that , embassy s is featuring a parser but it 's what i hear . one could also look at that and see whether there is some synergy possible . grad b: and they 're doing chunk parsing and it 's give you the names of the people who do it there . but . then there is more ways of parsing things . professor f: but given th the constraints , that you want it to be small and fast and , my is you 're probably into some chunk parsing . and i ' m not a big believer in this statistical , cleaning up it that seems to me a last resort if you ca n't do it any other way . but . it may i may be that 's what you guys finally decide do . and have you looked just again for context there is this one that they did at sri some years ago fastus ? a grad d: , i ' ve looked at it but it 's no not much information available . i found , professor f: it is . it 's it was pretty ambitious . and it was english oriented , grad d: , and purely finite - state transducers are not so good for german since there 's professor f: , i that 's all the morphology and . and english is all th all word order . and it makes a lot more sense . and e , ok . good point . so in german you ' ve got most of this done with grad d: - . 
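For flavor, the kind of finite-state chunking referred to above can be reduced to a few lines. The toy pattern below (optional determiner, adjectives, then nouns) is ours, not the Verbmobil or Embassi grammar, and real chunkers run over recognizer lattices rather than clean tag sequences:

```python
# Toy finite-state chunker over part-of-speech tags: find maximal
# DET? ADJ* NOUN+ spans.  Purely an illustration of the technique.
import re

TAGMAP = {"DET": "D", "ADJ": "A", "NOUN": "N"}   # everything else -> "x"

def chunk_noun_groups(tagged):
    tag_string = "".join(TAGMAP.get(tag, "x") for _, tag in tagged)
    return [[word for word, _ in tagged[m.start():m.end()]]
            for m in re.finditer(r"D?A*N+", tag_string)]

sentence = [("the", "DET"), ("old", "ADJ"), ("castle", "NOUN"),
            ("is", "VERB"), ("closed", "ADJ")]
print(chunk_noun_groups(sentence))   # [['the', 'old', 'castle']]
```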
also it 's yes , the choice between this processing and that processing and my template matcher . professor f: so what about did y like morfix ? a e y you ' ve got stemmers ? or is that something that professor f: so y you just connect to the lexicon at least for german you have all of the stemming information . grad d: , we can , we have knowledge bases from verbmobil system we can use and so . professor f: but it does n't look like i you 're using it . i did n't n see it being used in the current template parser . i did n't see any we l actually only looked at the english . phd a: but what what 's happening on - line is just a retrieval from the lexicon which would give all the stemming information so it would be a full foreign lexicon . professor f: so it , s i 'd so in german then you actually do case matching and things like in the pattern matcher or not ? professor f: cuz i r i did n't reme i did n't think i saw it . have we looked at the german ? , i haven that 's getting it from the lexicon is just fine . professor f: no problem with that . and here 's the case where the english and the german might really be significantly different . in terms of if you 're trying to build some fast parser and you really might wanna do it in a significantly different way . so you ' ve you guys have looked at this ? also ? in terms of , w if you 're doing this for english as german do you think now that it would be this doing it similarly ? grad d: it 's yes , it 's possible to do list processing . and maybe this is more adequate for english and in german set processing is used . grad b: there 's m i ' m there 's gon na be more discussion on that after your talk . professor f: now actually , are you guys free at five ? or do you have to go somewhere at five o ' clock tonight ? w in ten minutes ? professor f: that 's good , because that will tell you a fair amount about the form of semantic construction grammar that we 're using . so so i th that probably as good an introduction as you 'll get . to the form of conceptual grammar that w we have in mind for this . it wo n't talk particularly about how that relates to what robert was saying at the beginning . but let me give you a very short version of this . so we talked about the fact that there 're going to be a certain number of decisions that you want the knowledge modeler to make , that will be then fed to the function module , that does , route planning . it 's called the " route planner " . so there are these decisions . and then one half of this we talked about at little bit is how if you had the right information , if you knew something about what was said and about th the something about was the agent a tourist or a native or a business person or young or old , whatever . that information , and also about the , what we 're calling " the entity " , is it a castle , is it a bank ? is it a s town square , is it a statue ? whatever . so all that information could be combined into decision networks and give you decisions . but the other half of the problem is how would you get that information from the parsed input ? so , so what you might try to do is just build more templates , saying we 're trying to build a templ build a template that w somehow would capture the fact that he wants to take a picture . and and we could you could do this . and it 's a small enough domain that probably you , you could do this . 
but from our point of view this is also a research project and there are a couple of people not here for various reasons who are doing doctoral dissertations on this , and the idea that we 're really after is a very deep semantics based on cognitive linguistics and the notion that there are a relatively small number of primitive conceptual schemas that characterize a lot of activity . so a typical one in this formulation is a container . so this is a static thing . and the notion is that all sorts of physical situations are characterized in terms of containers . going in and out the portals and con but also , importantly for lakoff and these guys is all sorts of metaphorical things are also characterized this way . you get in trouble and et cetera and so s so , what we 're really trying to do is to map from the discourse to the conceptual semantics level . and from there to the appropriate decisions . so another one of these primitive , what are called " image schemas " , is goal seeking . so this a notion of a source , path , goal , trajector , possibly obstacles . and the idea is this is another conceptual primitive . and that all sorts of things , particularly in the tourist domain , can be represented in terms of source , path and goal . so the idea would be could we build an analyser that would take an utterance and say " aha ! th this utterance is talking about an attempt to reach a goal . the goal is this , the pers the , traveller is that , the sor w where we are at now is this , they ' ve mentioned possible obstacles , et cetera . " so th the and this is an again attempt to get very wide coverage . so if you can do this , then the notion would be that across a very large range of domains , you could use this deep conceptual basis as the interface . and then , the processing of that , both on the input end , recognizing that certain words in a language talk about containers or goals , et cetera , and on the output end , given this information , you can then make decisions about what actions to take . provides , they claim , a very powerful , general notion of deep semantics . so that 's what we 're really doing . and nancy is going to her talk is going to be not about using this in applications , but about modeling how children might learn this deep semantic grammar . phd a: and how do you envision the this deep semantic to be worked with . would it be highly ambiguous if and then there would be another module that takes that highly underspecified deep semantic construction and map it onto the current context to find out what the person really was talking about in that context . or a professor f: that 's where the belief - net comes in . so th the idea is , let 's take this business about going to the powder - tower . so part of what you 'll get out of this will be the fact tha w if it works right , ok , that this is an agent that wants to go to this place and that 's their goal and there will be additional situational information . professor f: part of it comes from the ontology . the tower is this object . part of it comes from the user model . and the idea of the belief - net is it combines the information from the dialogue which comes across in this general way , this is a goal seeking behavior , along with specific information from the ontology about the kinds of objects involved professor f: and about the situation about " is it raining ? " whatever it is . and so that 's the belief - net that we ' ve laid out . 
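A SOURCE-PATH-GOAL schema instance, and the way the belief-net is meant to combine it with ontology and situation evidence, can be sketched as below. The probability arithmetic is a toy stand-in for the actual belief-net, and all feature names and numbers are invented:

```python
# Sketch: a SOURCE-PATH-GOAL image schema extracted from an utterance,
# plus a toy combination of dialogue, ontology and situation evidence
# of the kind the belief-net is meant to do.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourcePathGoal:
    trajector: str              # who is moving
    source: Optional[str]       # where they are now
    goal: str                   # where they want to be
    obstacles: tuple = ()

utterance_schema = SourcePathGoal(trajector="tourist",
                                  source="town square",
                                  goal="Powder Tower")

def p_enter_vs_view(schema, ontology_type, raining):
    # Toy stand-in for the belief-net: combine evidence from the
    # dialogue (the schema), the ontology (what kind of entity the
    # goal is) and the situation (weather).
    p = 0.5
    if ontology_type == "tower":    # enterable landmark
        p += 0.2
    if raining:                     # bad weather favours going inside
        p += 0.2
    return min(p, 1.0)

print(p_enter_vs_view(utterance_schema, "tower", raining=True))  # 0.9
```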
and so th the coupling to the situation comes in this model from , at th at the belief - net , combining evidence from the dialogue with the ontology with the situation . but nancy is n't gon na talk about that ,

###summary: the berkeley even deeper understanding group discussed plans and concerns regarding the architecture of smartkom , its proposed modules , and the types of interactions expected to take place between modules . the meeting was largely focused on smartkom's decision making capacity and how to adapt this functionality to the tourist information domain . the group set a date for assessing smartkom plans . it was decided that smartkom's action plans should be represented in xml as a state transition network . it was proposed that the term 'action planner' should be replaced by 'dialogue planner' or 'dialogue manager' . prolog will be phased out completely and replaced by java code . the dialogue manager must be capable of changing states , i.e. event driven , so that it can answer a question raised by a planning module . smartkom should feature a well defined core interface , with domain-specific information kept external . a syntactic analysis component that performs chunk parsing will be added to the system . as a functional module , the action planner is too restrictive for the tourist domain and requires complex slots from the dialogue manager . what form will the language input have , and what will the action planner do with it ? links must be in place between the input end , action planner , parser , and language feedback components for communicating the current state of the plan . interactions in a deep map system between the spatial planner and the route planner are too convoluted . smartkom requires a fast and robust parser that includes language-specific extensions . which form of semantic construction grammar should be used , and how would such information be derived from the parsed input ? efforts are in progress to complete and test the code , generate an english grammar like that used in the german system , and get the parser interface working . a 'wizard of oz' style data collection experiment is in progress to model users' underlying intentions when communicating with the dialogue component of a tourist domain gps . corresponding interfaces and a belief net will be incorporated into the knowledge modelling module . while the current focus is on decision making , a fuller implementation of smartkom will enable it to query the user for desired information . stemming information is connected to the lexicon through the use of knowledge bases from verbmobil .
professor f: so you think we 're going now , yes ? ok , good . alright going again so we 're gon na go around as before , and do our digits . transcript one three one dash one three zero . three two three four seven six five three one six two four one six seven eight nine zero nine four zero three zero one five eight one seven three five three two six eight zero three six two four three zero seven four five zero six nine four seven four eight five seven nine six one five o seven eight o two zero nine six zero four zero one two , you do n't actually n need to say the name . professor f: not that there 's anything defamatory about eight five seven or anything , anyway . so here 's what i have for i was just jotting down things th w that we should do today . this is what i have for an agenda so far , we should talk a little bit about the plans for the field trip next week . a number of us are doing a field trip to ogi and mostly first though about the logistics for it . then maybe later on in the meeting we should talk about what we actually , might accomplish . , in and go around see what people have been doing talk about that , a r progress report . , essentially . and then another topic i had was that dave here had said " give me something to do . " and i have failed so far in doing that . and so maybe we can discuss that a little bit . if we find some holes in some things that someone could use some help with , he 's volunteering to help . professor f: ok , always count on a serious comment from that corner . so , , and , then , talk a little bit about disks and resource issues that 's starting to get worked out . and then , anything else anybody has that is n't in that list ? grad d: i was just wondering , does this mean the battery 's dying and i should change it ? professor f: it looks full of electrons . ok . plenty of electrons left there . ok , so , i wanted to start this with this mundane thing . i it was my bright idea to have us take a plane that leaves at seven twenty in the morning . professor f: this is the reason i did it was because otherwise for those of us who have to come back the same day it is really not much of a visit . so the issue is how would we ever accomplish that ? what part of town do you live in ? professor f: ok , so would it be easier those of you who are not , used to this area , it can be very tricky to get to the airport at , six thirty . so . would it be easier for you if you came here and i drove you ? , ok . professor f: , i ' m afraid we need to do that to get there on time . professor f: , so i 'll just pull up in front at six and just be out front . and , and , that 'll be plenty of time . it 'll take it wo n't be bad traffic that time of day and professor f: bridge , the turnoff to the bridge wo n't even do that . , just go down martin luther king . professor f: and then martin luther king to nine - eighty to eight - eighty , and it 's it 'd take us , tops thirty minutes to get there . professor f: so that leaves us fifty minutes before the plane it 'll just so great , ok so that 'll it 's , it 's still not going to be really easy but particularly for barry and me , we 're not staying overnight so we do n't need to bring anything particularly except for a pad of paper and so , and , you , two have to bring a little bit but , do n't bring a footlocker and we 'll be ok professor f: w you 're staying overnight . i figured you would n't need a great big suitcase , professor f: six am in front . , i 'll be here . 
i 'll give you my phone number , if i ' m not here for a few m after a few minutes then professor f: nah , i 'll be fine . , it for me it just means getting up a half an hour earlier than i usually do . not not a lot , professor f: so that was the real important . , i figured maybe on the potential goals for the meeting until we talk about wh what 's been going on . so , what 's been going on ? why do n't we start over here . phd g: . , preparation of the french test data actually . so , it means that , it is , a digit french database of microphone speech , downsampled to eight kilohertz and i ' ve added noise to one part , with the actually the aurora - two noises . and , @ so this is a training part . and then the remaining part , i use for testing and with other noises . so we can so this is almost ready . i ' m preparing the htk baseline for this task . and , . professor f: , so the htk base lines so this is using mel cepstra and so on , or ? . ok . , i the p the plan is , to then given this what 's the plan again ? professor f: with so does i just remind me of what you were going to do with the what 's y you just described what you ' ve been doing . so if you could remind me of what you 're going to be doing . , this is , . phd g: we actually we want to , mmm , , analyze three dimensions , the feature dimension , the training data dimension , and the test data dimension . , what we want to do is first we have number for each task . so we have the , ti - digit task , the italian task , the french task and the finnish task . so we have numbers with systems neural networks trained on the task data . and then to have systems with neural networks trained on , data from the same language , if possible , with , using a more generic database , which is phonetically balanced , and . professor f: so - so we had talked i we had talked at one point about maybe , the language id corpus ? phd g: ye - , but , these corpus , w there is a callhome and a callfriend also , the callfriend is for language ind identification . , anyway , these corpus are all telephone speech . so , . this could be a problem for why ? because , the speechdat databases are not telephone speech . they are downsampled to eight kilohertz but they are not with telephone bandwidth . professor f: that 's really funny is n't it ? cuz th this whole thing is for developing new standards for the telephone . phd g: , but the idea is to compute the feature before the before sending them to the , you do n't do not send speech , you send features , computed on th the device , professor f: i see , so your point is that it 's the features are computed locally , and so they are n't necessarily telephone bandwidth , or telephone distortions . professor f: i said @ , there 's an ogi language id , not the , the callfriend is a , ldc w thing , right ? phd g: yea - , there are also two other databases . one they call the multi - language database , and another one is a twenty - two language , something like that . but it 's also telephone speech . professor f: , we ' r e the bandwidth should n't be such an issue right ? because e this is downsampled and filtered , so it 's just the fact that it 's not telephone . and there are so many other differences between these different databases . some of this 's recorded in the car , and some of it 's there 's many different acoustic differences . so i ' m not if . 
, unless we 're going to include a bunch of car recordings in the training database , i ' m not if it 's completely rules it out if our if we if our major goal is to have phonetic context and you figure that there 's gon na be a mismatch in acoustic conditions does it make it much worse f to add another mismatch , if you will . professor f: , i i the question is how important is it to for us to get multiple languages , in there . phd g: , but - . , actually , for the moment if we w do not want to use these phone databases , we already have english , spanish and french , with microphone speech . so . professor f: so that 's what you 're thinking of using is the multi the equivalent of the multiple ? phd g: , this , actually , these three databases are generic databases . so w f for italian , which is close to spanish , french and , i , ti - digits we have both , digits training data and also more general training data . so . mmm . professor f: , we also have this broadcast news that we were talking about taking off the disk , which is microphone data for english . professor f: right . , so there 's plenty of around . ok , so anyway , th the basic plan is to , test this cube . yes . phd g: , and perhaps , we were thinking that perhaps the cross - language issue is not , so big of a issue . , w we perhaps we should not focus too much on that cross - language . , training a net on a language and testing a for another language . phd g: mmm . perhaps the most important is to have neural networks trained on the target languages . but , with a general database general databases . u so that th , the guy who has to develop an application with one language can use the net trained o on that language , or a generic net , professor f: so , if you 're talking about for producing these discriminative features that we 're talking about you ca n't do that . because because the what they 're asking for is a feature set . right ? and so , we 're the ones who have been weird by doing this training . but if we say , " no , you have to have a different feature set for each language , " this is ver gon na be very bad . professor f: . , in principle , conceptually , it 's like they want a re @ , they want a replacement for mel cepstra . so , we say " ok , this is the year two thousand , we ' ve got something much better than mel cepstra . it 's , gobbledy - gook . " ok ? and so we give them these gobbledy - gook features but these gobbledy - gook features are supposed to be good for any language . cuz you who 's gon na call , and , so it 's , how do what language it is ? somebody picks up the phone . so thi this is their image . someone picks up the phone , right ? professor f: but , no but , y you pick up the phone , you talk on the phone , professor f: and it sends features out . so the phone does n't a what your language is . professor f: but that 's the image they have , so that 's , one could argue all over the place about how things really will be in ten years . but the particular image that the cellular industry has right now is that it 's distributed speech recognition , where the , probabilistic part , and s semantics and are all on the servers , and you compute features of the , on the phone . so that 's what we 're involved in . we might or might not agree that 's the way it will be in ten years , but that 's what they 're asking for . so so that th it is an important issue whether it works cross - language . 
now , it 's the ogi , folks ' perspective right now that probably that 's not the biggest deal . and that the biggest deal is the , envir acoustic - environment mismatch . and they may very be right , but i was hoping we could just do a test and determine if that was true . if that 's true , we do n't need to worry so much . maybe maybe we have a couple languages in the training set and that gives us enough breadth , that the rest does n't matter . , the other thing is , this notion of training to which i they 're starting to look at up there , training to something more like articulatory features . , and if you have something that 's just good for distinguishing different articulatory features that should just be good across , a wide range of languages . , but , so i do n't th i know unfortunately i do n't i see what you 're comi where you 're coming from , but i do n't think we can ignore it . phd g: so we really have to do test with a real cross - language . , tr training on english and testing on italian , or or we can train or else , can we train a net on , a range of languages and which can include the test @ the target language , professor f: there 's , this is complex . so , ultimately , as i was saying , it does n't fit within their image that you switch nets based on language . now , can you include , the target language ? , from a purist 's standpoint it 'd be not to because then you can say when because surely someone is going to say at some point , " ok , so you put in the german and the finnish . , now , what do you do , when somebody has portuguese ? " , and , however , you are n't it is n't actually a constraint in this evaluation . so i would say if it looks like there 's a big difference to put it in , then we 'd make note of it , and then we probably put in the other , because we have so many other problems in trying to get things to work here that , it 's not so bad as long as we note it and say , " look , we did do this " . phd a: and so , ideally , what you 'd wanna do is you 'd wanna run it with and without the target language and the training set for a wide range of languages . phd a: and that way you can say , " , " we 're gon na build it for what we think are the most common ones " , but if that somebody uses it with a different language , " here 's what 's you 're l here 's what 's likely to happen . " professor f: , cuz the truth is , is that it 's not like there are , al although there are thousands of languages , from , the point of view of cellular companies , there are n't . professor f: there 's , there 's fifty , so , an and they are n't , with the exception of finnish , which i it 's pretty different from most things . , it 's , most of them are like at least some of the others . and so , our that spanish is like italian , and so on . i finnish is a little bit like hungarian , supposedly , professor f: or is , i kn , i know that h , i ' m not a linguist , but i hungarian and finnish and one of the languages from the former soviet union are in this same family . but they 're just these , countries that are pretty far apart from one another , have i , people rode in on horses and brought their grad c: , ok . , let 's see , i spent the last week , looking over stephane 's shoulder . and and understanding some of the data . i re - installed , htk , the free version , so , everybody 's now using three point o , which is the same version that , ogi is using . grad c: so , without any licensing big deals , or anything like that . 
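The experimental design proposed here — for each test language, run with and without that language in the net's training pool, crossed with the other cube axes — amounts to enumerating conditions like this. The corpus and feature lists are placeholders for whatever ends up on the whiteboard:

```python
# Placeholder enumeration of the cross-language conditions: for each
# test language, train the net on a multilingual pool with and
# without the target language, for each feature type.
import itertools

features = ["plp", "msg", "jrasta"]
languages = ["english", "spanish", "french", "italian"]

conditions = []
for feat, test_lang, with_target in itertools.product(
        features, languages, [True, False]):
    train_pool = [l for l in languages if with_target or l != test_lang]
    conditions.append((feat, tuple(train_pool), test_lang))

print(len(conditions))   # 3 features x 4 test languages x 2 = 24 runs
```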
and , so we ' ve been talking about this , cube thing , and it 's beginning more and more looking like the , the borge cube thing . it 's really gargantuan . , but i ' m am i grad c: exactly . , so i ' ve been looking at , timit . , the that we ' ve been working on with timit , trying to get a , a labels file so we can , train up a net on timit and test , the difference between this net trained on timit and a net trained on digits alone . , and seeing if it hurts or helps . anyway . professor f: , when y just to clarify , when you 're talking about training up a net , you 're talking about training up a net for a tandem approach ? grad c: , the inputs are one dimension of the cube , which , we ' ve talked about it being , plp , m f c cs , j - jrasta , jrasta - lda grad c: , i have n't decided on the initial thing . probably probably something like plp . professor f: . , so you take plp and you , do it , you , use htk with it with the transformed features using a neural net that 's trained . and the training could either be from digits itself or from timit . professor f: and that 's the and , and th and then the testing would be these other things which might be foreign language . i see . i get in the picture about the cube . professor f: , those listening to this will not have a picture either , so , i ' m not any worse off . but but at some point somebody should just show me the cube . it sounds s i get the general idea of it , phd a: so , when you said that you were getting the labels for timit , are y what do you mean by that ? grad c: b may , i ' m just , transforming them from the , the standard timit transcriptions into a long huge p - file to do training . phd a: i was just wondering because that test you 're t you 're doing this test because you want to determine whether or not , having s general speech performs as having specific speech . professor f: , especially when you go over the different languages again , because you 'd the different languages have different words for the different digits , phd a: , so i was just wondering if the fact that timit you 're using the hand - labeled from timit might be confuse the results that you get . phd a: right , but if it 's better , it may be better because it was hand - labeled . professor f: , i i ' m sounding cavalier , but , you have , a bunch of labels and they 're han hand - marked . , i , actually , timit was not entirely hand - marked . it was automatically first , and then hand - corrected . professor f: but but , , it might be a better source . so , i it 's you 're right . it would be another interesting scientific question to ask , " is it because it 's a broad source or because it was , carefully ? " . and that 's something you could ask , but given limited time , the main thing is if it 's a better thing for going across languages on this training tandem system , then it 's probably grad c: , right . , there 's a mapping from the sixty - one phonemes in timit to fifty - six , the icsi fifty - six . grad c: and then the digits phonemes , there 's about twenty - two or twenty - four of them ? is that right ? phd g: but , actually , the issue of phoneti phon phone phoneme mappings will arise when we will do severa use several languages because you , some phonemes are not , in every languages , and so we plan to develop a subset of the phonemes , that includes , all the phonemes of our training languages , and use a network with one hundred outputs like that . phd e: i th i looks the sampa phone . sampa phone ? 
for english american english , and the language who have more phone are the english . of the these language . but n , in spain , the spanish have several phone that d does n't appear in the e english and we thought to complete . but for that , it needs we must r h do a lot of work because we need to generate new tran transcription for the database that we have . phd b: other than the language , is there a reason not to use the timit phone set ? cuz it 's larger ? as opposed to the icsi phone set ? grad c: you mean why map the sixty - one to the fifty - six ? i . i have professor f: , i forget if that happened starting with you , or was it o or if it was eric , afterwards who did that . but , there were several of the phones that were just hardly ever there . phd a: and some of them , they were making distinctions between silence at the end and silence at the beginning , when really they 're both silence . i th it was things like that got it mapped down to fifty - six . professor f: , especially in a system like ours , which is a discriminative system . , you 're really asking this net to learn . it 's it 's hard . phd a: there 's not much difference , really . and the ones that are gone , are there was they also in timit had like a glottal stop , which was a short period of silence , professor f: i it 's actually pretty common that a lot of the recognition systems people use have things like , say thirty - nine , phone symbols , and then they get the variety by bringing in the context , the phonetic context . so we actually have an unusually large number in what we tend to use here . so , a actually maybe now you ' ve got me intrigued . what there 's can you describe what 's on the cube ? grad c: and maybe we could sections in the cube for people to work on . , do you wanna do it ? professor f: so even though the meeting recorder does n't , and since you 're not running a video camera we wo n't get this , but if you use a board it 'll help us anyway . , point out one of the limitations of this medium , but you ' ve got the wireless on , grad c: the first dimension is the features that we 're going to use . and the second dimension , is the training corpus . and that 's the training on the discriminant neural net . and the last dimension happens to be professor f: . so the training for htk is always that 's always set up for the individual test , that there 's some training data and some test data . so that 's different than this . grad c: right , right . this is this is for ann only . and , the training for the htk models is always , fixed for whatever language you 're testing on . and then , there 's the testing corpus . so , then it 's probably instructive to go and show you the features that we were talking about . so , let 's see . help me out with phd g: so , something like , s tct within bands and . and then multi - band after networks . meaning that we would have , neural networks , discriminant neural networks for each band . and using the outputs of these networks or the linear outputs like that . phd a: what about mel cepstrum ? or is that you do n't include that because it 's part of the base ? professor f: , y you do have a baseline system that 's m that 's mel cepstra , professor f: probably should . at least conceptually , it does n't meant you actually have to do it , conceptually it makes sense as a base line . phd a: it 'd be an interesting test just to have just to do mfcc with the neural net phd a: and everything else the same . 
compare that with just m - mfcc without the net . grad c: i think dan did some of that . , in his previous aurora experiments . and with the net it 's wonderful . without the net it 's just baseline . professor f: , ogi folks have been doing that , too . d because that for a bunch of their experiments they used , mel cepstra , actually . , that 's there and this is here and so on . ok ? grad c: , for the training corpus , we have , the d digits { nonvocalsound } from the various languages . , english spanish , french what else do we have ? phd g: so english , finnish and italian are aurora . and spanish and french is something that we can use in addition to aurora . professor f: probably so . , herve always insists that belgian is i is pure french , has nothing to do with but he says those parisians talk funny . grad c: and then we have , , broader corpus , like timit . timit so far , grad c: , ti - digits all these aurora f d data p data is from is derived from ti - digits . , they corrupted it with , different kinds of noises at different snr levels . professor f: y and stephane was saying there 's some broader s material in the french also ? phd b: did the aurora people actually corrupt it themselves , or just specify the signal and the signal - t grad c: they they corrupted it , themselves , but they also included the noise files for us , or so we can go ahead and corrupt other things . professor f: i ' m just curious , carmen , i could n't tell if you were joking or i is it is it mexican spanish , professor f: or is it , no . it 's it 's spanish from spain , spanish . professor f: alright . spanish from spain . , we 're really covered there now . ok . and the french from france . professor f: so - s so it 's not really from the us either . is that ? ok . grad c: and , with within the training corporas , we 're , thinking about , training with noise . incorporating the same kinds of noises that , aurora is in incorporating in their , in their training corpus . , i do n't think we 're given the , the unseen noise conditions , though , professor f: what they were saying was that , for this next test there 's gon na be some of the cases where they have the same type of noise as you were given before hand and some cases where you 're not . professor f: so , presumably , that 'll be part of the topic of analysis of the test results , is how you do when it 's matching noise and how you do where it 's not . that 's right . professor f: ok . , i it does seem to me that a lot of times when you train with something that 's at least a little bit noisy it can help you out in other kinds of noise even if it 's not matching just because there 's some more variance that you ' ve built into things . but , but , , exactly how it will work will depend on how near it is to what you had ahead of time . so . ok , so that 's your training corpus , and then your testing corpus ? grad c: , the testing corporas are , just , the same ones as aurora testing . and , that includes , the english spa - , italian . finnish . grad c: , we ' r we 're gon na get german , ge - at the final test will have german . professor f: , so , the final test , on a , is supposed to be german and danish , phd g: and , according to hynek it will be we will have this at the end of november , professor f: something like seven things in each , each column . so that 's , three hundred and forty - three , different systems that are going to be developed . there 's three of you . grad c: there - there 's three tests . 
type - a , type - b , and type - c . and they 're all gon na be test tested , with one training of the htk system . , there 's a script that tests all three different types of noise conditions . test - a is like a matched noise . test - b is a slightly mismatched . and test - c is a , mismatched channel . grad c: we 're gon na be , training on the noise files that we do have . professor f: so i the question is how long does it take to do a training ? , it 's not crazy t , these are a lot of these are built - in things and we know we have programs that compute plp , we have msg , we have jra , a lot of these things will just happen , wo n't take a huge amount of development , it 's just trying it out . so , we actually can do quite a few experiments . but how long does it take , do we think , for one of these trainings ? professor f: good point . a major advantage of msg , i see , th that we ' ve seen in the past is combined with plp . professor f: you do n't wanna , let 's see , seven choose two would be , twenty - one different combinations . professor f: so plp and msg we definitely wanna try cuz we ' ve had a lot of good experience with putting those together . phd a: when you do that , you 're increasing the size of the inputs to the net . do you have to reduce the hidden layer , ? phd a: no , no , i ' m just wondering about number of parameters in the net . do you have to worry about keeping that the same , phd b: is n't there like a limit on the computation load , or d latency , like that for aurora task ? professor f: we have n't talked about any of that , have we ? so , there 's not really a limit . what it is that there 's , it 's just penalty , that that if you 're using , a megabyte , then they 'll say that 's very , but , it will never go on a cheap cell phone . and , u , the computation is n't so much of a problem . it 's more the memory . , and , expensive cell phones , exa expensive hand - helds , and , are gon na have lots of memory . so it 's just that , these people see the cheap cell phones as being still the biggest market , but , i was just realizing that , actually , it does n't explode out , it 's not really two to the seventh . but it 's but i it does n't really explode out the number of trainings cuz these were all trained individually . , if you have all of these nets trained some place , then , you can combine their outputs and do the kl transformation and , so , what it blows out is the number of testings . and , and the number of times you do that last part . but that last part , is so has got ta be pretty quick , it 's just running the data through phd a: is that just separate nets for each language then combined , or is that actually one net trained on ? professor f: one would think one net , but we ' ve i do n't think we ' ve tested that . phd g: so , in the broader training corpus we can use , the three , or , a combination of two languages . professor f: so , i the first thing is if w if we know how much a how long a training takes , if we can train up all these combinations , then we can start working on testing of them individually , and in combination . because the putting them in combination , is not as much computationally as the r training of the nets in the first place . so y you do have to compute the kl transformation . which is a little bit , but it 's not too much . phd g: but . but there is the testing also , which implies training , the htk models professor f: i what ravioli is . is it is it an ultra - five , or is it a ? 
phd g: it 's - it 's not so long because , the ti - digits test data is about , how many hours ? , th , thirty hours of speech , professor f: so , clearly , there 's no way we can even begin to do an any significant amount here unless we use multiple machines . professor f: so so w we there 's plenty of machines here and they 're n they 're often not in a great deal of use . so , it 's key that the that you look at , , what machines are fast , what machines are used a lot , are we still using p - make ? is that ? professor f: , you have a , once you get the basic thing set up , you have just all the , a all these combinations , it 's it 's let 's say it 's six hours or eight hours , for the training of htk . how long is it for training of , the neural net ? professor f: again , we do have a bunch of spert boards . and there 's you folks are probably go the ones using them right now . grad c: ad - adam did some testing . or either adam or dan did some testing and they found that the spert board 's still faster . and the benefits is that , you run out of spert and then you can do other things on your computer , and you do n't professor f: so you could be we have quite a few spert boards . you could set up , , ten different jobs , to run on spert different spert boards and have ten other jobs running on different computers . so , it 's got to take that thing , or we 're not going to get through any significant number of these . so this is , i like this because what it no , no , what i like about it is we do have a problem that we have very limited time . so , with very limited time , we actually have really quite a bit of computational resource available if you , get a look across the institute and how little things are being used . on the other hand , almost anything that really i , is new , where we 're saying , " , let 's look at , like we were talking before about , voiced - unvoiced - silence detection features and all those sort " that 's it 's a great thing to go to . but if it 's new , then we have this development and learning process t to go through on top of just the all the work . so , i do n't see how we 'd do it . so what i like about this is you have listed all the things that we already know how to do . and and all the kinds of data that we , at this point , already have . and , you 're just saying let 's look at the outer product of all of these things and see if we can calculate them . a am i am i interpreting this correctly ? is this what you 're thinking of doing in the short term ? so so then it 's just the missing piece is that you need to , you know , talk to , chuck , talk to , adam , sort out about , what 's the best way to really , attack this as a mass problem in terms of using many machines . and , then , set it up in terms of scripts and , in kind o some structured way . , and , when we go to , ogi next week , we can then present to them , what it is that we 're doing . and , we can pull things out of this list that we think they are doing sufficiently , professor f: that , we 're not we wo n't be contributing that much . then , like , we 're there . grad c: , for the for nets trained on digits , we have been using , four hundred order hidden units . and , for the broader class nets we 're going to increase that because the , the digits nets only correspond to about twenty phonemes . so . professor f: it 's not actually broader class , it 's actually finer class , but you mean y you mean more classes . 
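Since the expensive part is training the nets, not combining them, the "last part" described above — take the outputs of nets that are already trained, concatenate them, and apply the KL transformation — is roughly the following. A numpy sketch; the dimensions, the random stand-in posteriors, and the number of components kept are all arbitrary here:

```python
# Sketch of the cheap combination step: concatenate the (log)
# posterior outputs of already-trained nets and decorrelate them
# with a Karhunen-Loeve transform (PCA) before HTK training.
import numpy as np

def kl_transform(feats):
    mean = feats.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(feats - mean, rowvar=False))
    order = np.argsort(eigvals)[::-1]            # largest variance first
    return mean, eigvecs[:, order]

def apply_kl(feats, mean, basis, keep=24):
    return (feats - mean) @ basis[:, :keep]

rng = np.random.default_rng(0)
net_a = np.log(rng.dirichlet(np.ones(24), size=1000))  # stand-in posteriors
net_b = np.log(rng.dirichlet(np.ones(24), size=1000))
combined = np.hstack([net_a, net_b])                   # (1000, 48)
mean, basis = kl_transform(combined)
htk_features = apply_kl(combined, mean, basis)         # (1000, 24)
```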
professor f: carmen , did you do you have something else to add ? we you have n't talked too much , phd e: d i begin to work with the italian database to nnn , to with the f front - end and with the htk program and the @ . and i trained , with the spanish two neural network with plp and with lograsta plp . i exactly what is better if lograsta or jrasta . professor f: , jrasta has the potential to do better , but it does n't always . it 's i jrasta is more complicated . it 's it 's , instead of doing rasta with a log , you 're doing rasta with a log - like function that varies depending on a j parameter , which is supposed to be sensitive to the amount of noise there is . so , it 's like the right transformation to do the filtering in , is dependent on how much noise there is . professor f: and so in jrasta you attempt to do that . it 's a little complicated because once you do that , you end up in some funny domain and you end up having to do a transformation afterwards , which requires some tables . and , so it 's a little messier , there 's more ways that it can go wrong , but if you 're careful with it , it can do better . phd e: , and to recognize the italian digits with the neural netw spanish neural network , and also to train another neural network with the spanish digits , the database of spanish digits . and i working that . but prepa to prepare the database are difficult . was for me , n it was a difficult work last week with the labels because the program with the label obtained that i have , the albayzin , is different w to the label to train the neural network . and that is another work that we must to do , to change . phd e: , albayzin database was labeled automatically with htk . it 's not hand it 's not labels by hand . professor f: so let 's start over . so , ti timi timit 's hand - labeled , and you 're saying about the spanish ? phd e: the spanish labels ? that was in different format , that the format for the program to train the neural network . phd e: it 's it 's but n yes , because they have one program , feacalc , but no , l labecut , but do n't does n't , include the htk format to convert . and , i what . i ask e even i ask to dan ellis what do that , and h they he say me that h he does does n't any s any form to do that . and at the end , that with labecut transfer to ascii format , and htk is an ascii format . and i m do another , one program to put ascii format of htk to ase ay ac ascii format to exceed and they used labcut to pass . actually that was complicated , professor f: so it 's just usual sometimes say housekeeping , to get these things sorted out . so it seems like there 's some peculiarities of the , of each of these dimensions that are getting sorted out . and then , if you work on getting the , assembly lines together , and then the pieces get ready to go into the assembly line and gradually can start , start turning the crank , more or less . we have a lot more computational capability here than they do at ogi , so that i if what 's what 's great about this is it sets it up in a very systematic way , so that , once these all of these , mundane but real problems get sorted out , we can just start turning the crank and push all of us through , and then finally figure out what 's best . grad c: i was thinking two things . , the first thing was , we actually had thought of this as like , not in stages , but more along the time axis . just like one stream at a time , je - je check out the results and go that way . 
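The label wrangling described above — converting between label formats and collapsing the phone set for net training — boils down to something like this. HTK ASCII labels carry start and end times in 100 ns units; the two example phone merges are illustrative, not the actual 61-to-56 table:

```python
# Sketch of the label conversion discussed: read HTK-style ASCII
# labels, collapse the phone set, and expand to one target per
# 10 ms frame for net training.
PHONE_MAP = {"h#": "sil", "pau": "sil", "epi": "sil", "q": "sil"}

def htk_labels_to_frames(lines, frame_units=100_000):  # 10 ms in 100 ns
    frames = []
    for line in lines:
        start, end, phone = line.split()[:3]
        phone = PHONE_MAP.get(phone, phone)
        frames.extend([phone] * ((int(end) - int(start)) // frame_units))
    return frames

lab = ["0 1500000 h#", "1500000 2500000 ah"]   # 0.15 s sil, 0.10 s ah
print(htk_labels_to_frames(lab))               # 15 x 'sil' + 10 x 'ah'
```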
professor f: no , i ' m just saying , i ' m just thinking of it like loops , and so , y if you had three nested loops , that you have a choice for this , and a choice for that , and you 're going through them all . that that 's what i meant . professor f: and , that once you get a better handle on how much you can realistically do , , concurrently on different machines , different sperts , and you see how long it takes on what machine and , you can stand back from it and say , " ok , if we look these combinations we 're talking about , and combinations of combinations , and , " you 'll probably find you ca n't do it all . so then at that point , we should sort out which ones do we throw away . which of the combinations across , what are the most likely ones , i still think we could do a lot of them . , it would n't surprise me if we could do a hundred of them . but , probably when you include all the combinations , you 're actually talking about a thousand of them , and that 's probably more than we can do . but a hundred is a lot . and and , grad c: and the second thing was about scratch space . and you sent an email about , e scratch space for people to work on . and i know that , stephane 's working from an nt machine , so his home directory exists somewhere else . professor f: his his is somewhere else , my point i want to for bring it back to that . my th i want to clarify my point about that chuck repeated in his note . we 're over the next year or two , we 're gon na be upgrading the networks in this place , but right now they 're still all te all ten megabit lines . and we have reached the this the machines are getting faster and faster . so , it actually has reached the point where it 's a significant drag on the time for something to move the data from one place to another . so , you do n't w especially in something with repetitive computation where you 're going over it multiple times , you do do n't want to have the data that you 're working on distant from where it 's being where the computation 's being done if you can help it . now , we are getting more disk for the central file server , which , since it 's not a computational server , would seem to be a contradiction to what said . but the idea is that , suppose you 're working with , this big bunch of multi multilingual databases . you put them all in the central ser at the cen central file server . then , when you 're working with something and accessing it many times , you copy the piece of it that you 're working with over to some place that 's close to where the computation is and then do all the work there . and then that way you wo n't have the network you wo n't be clogging the network for yourself and others . that 's the idea . so , it 's gon na take us it may be too late for this , p precise crunch we 're in now , but , we 're , it 's gon na take us a couple weeks at least to get the , the amount of disk we 're gon na be getting . we 're actually gon na get , four more , thirty - six gigabyte drives and , put them on another disk rack . we ran out of space on the disk rack that we had , so we 're getting another disk rack and four more drives to share between , primarily between this project and the meetings project . we ' ve put another i there 's another eighteen gigabytes that 's in there now to help us with the immediate crunch . are you saying so i where you 're stephane , where you 're doing your computations . 
if so , you 're on an nt machine , so you 're using some external machine . professor f: are these computational servers ? i ' ve been out of it . professor f: unfortunately , these days my idea of doing computation is running a spreadsheet . i have n't been doing much computing personally . so those are computational servers . then the other question is what disk space there is on the computational servers . phd a: i ' m not sure what 's available on was it nutmeg you said , and what was the other one ? professor f: so , chuck will be the one who will be sorting out what disk needs to be where , and so on , and i 'll be the one who says , " ok , spend the money . " these days , if you 're talking about scratch space , it does n't increase the need for backup , and the disks themselves are not that expensive . phd a: what you can do , when you 're on that machine , is just go to the slash - scratch directory and do a df minus k , and it 'll tell you if there 's space available . and if there is , then professor f: but was n't dave saying that he preferred that people did n't put things in slash - scratch ? it 's more putting things in xa or xb or phd a: there 's different kinds . there 's the slash - x - whatever disks , and then there 's slash - scratch . and both of those two kinds are not backed up . and if it 's called " slash - scratch " , it means it 's probably an internal disk to the machine . so that 's the thing where , ok , if you do n't have an nt but you have a unix workstation and they attach an external disk , it 'll be called " slash - x - something " if it 's not backed up , and it 'll be " slash - d - something " if it is backed up . and if it 's inside the machine on the desk , it 's called " slash - scratch " . but the problem is , if you ever get a new machine , they take your machine away . it 's easy to unhook the external disks and put them back on the new machine , but then your slash - scratch is gone . so you do n't wanna put anything in slash - scratch that you wanna keep around for a long period of time . but if it 's a copy of , say , some data that 's on a server , you can put it on slash - scratch , because first of all it 's not backed up , and second it does n't matter if that machine disappears and you get a new machine , because you just recopy it to slash - scratch . so that 's why i was saying you could check slash - scratch on mustard and nutmeg to see if there 's space that you could use there . you could also use the slash - x - whatever disks on mustard and nutmeg . it 's better to have things local if you 're gon na run over them lots of times , so you do n't have to go to the network . professor f: right , so especially if you 're taking some piece of the training corpus , which usually resides where chuck is putting it all , on the file server , then it 's fine if it 's not backed up , because if it gets wiped out , it is backed up on the other disk . phd a: so , one of the things i ' ve started looking at is this the appropriate time to talk about the disk space ? i ' ve started looking at disk space . david put a new drive onto abbott that 's an x disk , which means it 's not backed up .
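a scripted stand - in for the " df minus k " check above , assuming python 's standard shutil module ; the disk names are the ones mentioned in the discussion and will differ per machine :

    import shutil

    for disk in ["/scratch", "/xa", "/xb"]:
        try:
            # free space in GB on the filesystem holding this path
            free_gb = shutil.disk_usage(disk).free / 1024**3
            print(disk, round(free_gb, 1), "GB free")
        except FileNotFoundError:
            print(disk, "not present on this machine")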
so , i ' ve been going through and copying data , usually some corpus that we ' ve got on a cd - rom , onto that new disk to free up space on other disks . so far i ' ve copied a couple of carmen 's databases over there . we have n't deleted them off of the slash - dc disk that they 're on right now in abbott , but i would like to sit down with you about some of these other ones and see if we can move them onto this new disk also . there 's a lot more space there . phd a: and it 'll free up more space for doing the experiments and things . so , anything that you do n't need backed up , we can put on this new disk . but if it 's experiments and you 're creating files and things that you 're gon na need , you probably wanna have those on a disk that 's backed up , just in case something goes wrong . so far i ' ve copied a couple of things , but i have n't deleted anything off of the old disk to make room yet . and i have n't looked at any of the aurora data except for the spanish . so i 'll need to get together with you and see what data we can move onto the new disk . professor f: another question occurred to me : what were you folks planning to do about normalization ? phd g: so , this could be another dimension , but we think perhaps we can use the best normalization scheme that ogi is using , with the parameters that they use there . professor f: we seem to have enough dimensions as it is . so probably we take their on - line normalization , because if we do anything else , we 're gon na end up having to do on - line normalization too , so we may as well just do on - line normalization , so that it 's plausible for the final thing . good . so , the other topic , and maybe we 're already there or almost there , is goals for next week 's meeting . it seems to me that what we wanna do is flesh out what you put on the board here . maybe have it be somewhat visual , a little bit . professor f: so we can say what we 're doing , and also , if you have sorted out this information about roughly how long it takes to do what on which machine , and how many of these trainings and testings we can realistically do , then one of the big goals of going there next week would be to actually settle on which of them we 're gon na do . and when we come back we can charge in and do it . this field trip actually started off with stephane talking to hynek , so you may have had other goals for going up . anything else you can think of that we should think about accomplishing ? i ' m just saying this because maybe there 's things we need to do in preparation . professor f: and the last topic i had here was dave 's fine offer to do something on this . he 's working on other things , but he 'd like to do something on this project . so the question is , where could we most use dave 's help ? phd g: i was thinking perhaps , additionally to all these experiments , which are not really research but running programs , he could have a closer look at the speech / noise detection , or voiced - sound / unvoiced - sound detection , which could be important for noise phd a: that 's a big deal . because the thing that sunil was talking about , with the labels , labeling the database when it got to the noisy part ? that really throws things off .
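on the earlier normalization point , a minimal sketch of one common on - line scheme , exponentially - weighted mean and variance normalization ; the forgetting factor here is an illustrative value , not the parameter ogi settled on :

    import numpy as np

    def online_normalize(frames, alpha=0.995):
        # frames: array of shape (time, feature_dim)
        mean = np.zeros(frames.shape[1])
        var = np.ones(frames.shape[1])
        out = np.empty_like(frames, dtype=float)
        for t, x in enumerate(frames):
            # running estimates updated frame by frame, so the scheme
            # works on a stream without seeing the whole utterance first
            mean = alpha * mean + (1.0 - alpha) * x
            var = alpha * var + (1.0 - alpha) * (x - mean) ** 2
            out[t] = (x - mean) / np.sqrt(var + 1e-8)
        return out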
having the noise all of a sudden , your speech detector what was it ? what was happening with his thing ? he was running through these models very quickly . he was getting lots of insertions , is what it was , in his recognitions . professor f: maybe that 's the right thing , but the only problem i have with it is exactly the same reason why you thought it 'd be a good thing to do . let 's fall back to that . the first responsibility is to figure out what an additional clever person could help with when we 're really in a crunch for time . that 's fine , you can remove the mike . go ahead , good . cuz dave 's gon na be around for a long time , he 's gon na be here for years . and so , over years , if he 's interested in voiced - unvoiced - silence , he could do a lot . but if there 's something else that he could be doing , that would help us when we 're strapped for time . we ' ve only got another month or two , with the holidays in the middle of it , to get a lot done . the very fact that it is just work , running programs and , is exactly why it 's possible that some piece of it could be handed to someone to do . so that 's the question . and we do n't have to solve it right this second , but if we could think of some piece that 's defined that he could help with , he 's expressing a willingness to do that . phd e: yes , maybe to put together the labels between timit and spanish , something like that . professor f: that 's something that needs to be done in any event . so what we were just saying is that i was arguing for , if possible , coming up with something that really was development and was n't research , because we have a time crunch . and so , if there 's something that would save some time , that someone else could do on some other piece , then we should think of that first . see , the thing with voiced - unvoiced - silence is , i really think that to do a poor job , or a so - so job , is pretty quick . you can throw in a couple kinds of features that help with it . you can throw something in , and you can do pretty well . but i remember when you were working on that , and you worked on it for a few months , as i recall , and you got to , say , ninety - three percent , and getting to ninety - four was really hard . professor f: and the other tricky thing is , even though we do n't have a strict prohibition on memory size and computational complexity , clearly there 's some limitation to them . so if we say we have to have a pitch detector , say , if we 're trying to incorporate pitch information , or at least some harmonicity , that 's another whole thing that would take a while to develop . anyway , it 's a very interesting topic . a lot of people would say , and dan would also , that one of the things wrong with current speech recognition is that we really do throw away all the harmonicity information . we try to get spectral envelopes . the reason for doing that is that most of the information about the phonetic identity is in the spectral envelopes , not in the harmonic detail . but the harmonic detail does tell you something . the very fact that there is harmonic detail is real important .
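on the voiced - unvoiced - silence point above , here are two of the classic cheap cues , as a sketch with made - up thresholds ; getting from a quick rule like this to the last percentage point is where the months go :

    import numpy as np

    def vus_label(frame, e_sil=-8.0, z_voiced=0.1):
        # log frame energy separates silence from speech ...
        energy = np.log(np.sum(frame ** 2) + 1e-10)
        # ... and zero-crossing rate roughly separates voiced from unvoiced
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        if energy < e_sil:          # both thresholds are placeholders
            return "silence"
        return "voiced" if zcr < z_voiced else "unvoiced"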
so wh that so the other suggestion that just came up was , what about having him work on the , multilingual super f superset thing . , coming up with that and then , training it training a net on that , say , from timit . is that or , for multiple databases . what what would you think it would wh what would this task consist of ? phd g: , it would consist in , , creating the superset , and , modifying the lab labels for matching the superset . professor f: and then creating i m changing labels on timit ? or on or on multiple language multiple languages ? grad c: there 's , carmen was talking about this sampa thing , and it 's , it 's an effort by linguists to come up with , a machine readable ipa , thing , and , they have a web site that stephane was showing us that has , has all the english phonemes and their sampa correspondent , phoneme , and then , they have spanish , they have german , they have all sorts of languages , mapping to the sampa phonemes , which phd e: the tr the transcription , though , for albayzin is n the transcription are of sampa the same , how you say , symbol that sampa appear . professor f: what , has ogi done anything about this issue ? do they have any superset that they already have ? phd g: i do n't . , they 're going actually the other way , defining , phoneme clusters , . professor f: aha . that 's right . , and that 's an interesting way to go too . phd a: so they just throw the speech from all different languages together , then cluster it into sixty or fifty or whatever clusters ? phd g: they ' ve not done it , doing , multiple language yet , but what they did is to training , english nets with all the phonemes , and then training it in english nets with , seventeen , it was seventeen , broad classes . phd g: , so . and , . and the result was that , when testing on cross - language it was better . but hynek did n't add did n't have all the results when he showed me that , so , . professor f: is there 's some way that we should tie into that with this . , if that is a better thing to do , should we leverage that , rather than doing , our own . so , if i if they s , we have i we have the trainings with our own categories . and now we 're saying , " , how do we handle cross - language ? " and one way is to come up with a superset , but they are als they 're trying coming up with clustered , and do we think there 's something wrong with that ? phd g: or , because , for the moment we are testing on digits , and e i perhaps u using broad phoneme classes , it 's ok for , classifying the digits , but as soon as you will have more words , words can differ with only a single phoneme , and which could be the same , class . phd g: , but you will ask the net to put one for th the phoneme class and so . phd a: so you 're saying that there may not be enough information coming out of the net to help you discriminate the words ? phd b: fact , most confusions are within the phone classes , right ? , larry was saying like obstruents are only confused with other obstruents , et cetera . professor f: instead of the superclass thing , which is to take so suppose y you do n't really mark arti to really mark articulatory features , you really wanna look at the acoustics and see where everything is , and we 're not gon na do that . , the second class way of doing it is to look at the , phones that are labeled and translate them into acoustic , articulatory , features . so it wo n't really be right . 
you wo n't really have these overlapping things . professor f: you either do that or you have multiple nets . and i do n't know if the versions of quicknet that we 're using allow for that . do you ? professor f: so that 'll work . that 's another thing that could be done : instead of translating to a superset , just translate to articulatory features , some set of articulatory features , and train with that . even though it 's a smaller number of outputs , it 's still fine , because you have the combinations . so it has every distinction in it that you would have the other way . but it should go across languages better . phd a: we could do an interesting cheating experiment with that too . if you had the phone labels , you could replace them by their articulatory features and then feed in a vector with those things turned on , based on what they 're supposed to be for each phone , to see if you get a big win . do you see what i ' m saying ? phd a: your net is gon na be outputting a vector of probabilities , but let 's say they were ones and zeros . then , if you know for your test data what the string of phones is , and you have them aligned , then instead of going through the net you can just create the vector for each phone and feed that in , to see if that data helps . what made me think about this is , i was talking with hynek , and he said that there was a guy at at&t who spent eighteen months working on a single feature , because they had done some cheating experiments . professor f: this was the guy that we were just talking about , that we saw on campus . so this was larry saul who did this . he used sonorants . phd a: hynek said that before they had him work on this , they had done some experiment where if they could get that one feature right , it dramatically improved the result . phd a: so i was thinking , it made me think that it 'd be an interesting experiment just to see what happens if you did get all of those right . professor f: it should be . because if you get all of them in there , that defines all of the phones . so that 's equivalent to saying that you ' ve got all the phones right . so if that does n't help , there 's a problem . although , it would make an interesting cheating experiment , because we are using it in this funny way , where we 're converting it into features . phd a: and then you also know what error they ' ve got on the htk side , right ? it gives you the best you could hope for . phd b: the soft training of the nets still requires the vector to sum to one , though . phd b: so you ca n't really feed it , like , two articulatory features that are on at the same time with ones , cuz it 'll normalize them down to one half , like that . phd b: is n't that what you 'll want , if you 're gon na do a kl transform on it ?
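a sketch of the cheating experiment as described , with a hypothetical phone - to - feature table ; the normalization step is exactly the sum - to - one issue just raised :

    import numpy as np

    FEATURES = ["voiced", "nasal", "fricative", "stop", "front", "high"]
    PHONE_FEATS = {          # hypothetical entries; a real table covers all phones
        "m":  {"voiced", "nasal"},
        "s":  {"fricative", "front", "high"},
        "iy": {"voiced", "front", "high"},
    }

    def cheating_vector(phone, normalize=True):
        # ones and zeros standing in for net outputs, built from aligned labels
        v = np.array([1.0 if f in PHONE_FEATS[phone] else 0.0 for f in FEATURES])
        # if the downstream kl transform expects posterior-like inputs that
        # sum to one, several "on" features get scaled down together
        return v / v.sum() if normalize else v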
the main topics discussed were the arrangements and objectives for the upcoming field trip to the research partners at ogi ; progress reports from several members ; whether there are tasks one member could take on to help the others ; and an overall description of the cube project , a multilingual speech recognition system for use by the cellular phone industry , along with some of the issues it raises , specifically disk and resource issues . essentially the cube consists of three dimensions : input features , training corpus , and test corpus . the most important concerns are which combinations of features to use , and which combinations of languages and broad versus task - specific corpora to use for training .

the group will meet at the building at six am to go to the airport for the field trip together . speaker me018 needs to discuss with speaker mn007 which files can be moved . for the ogi meeting they need to take a clear description of the cube project and an estimate of how long the entire process should take , and at the meeting they should discuss what they will ultimately put through the system . people are to consider what me034 could do on the project to speed things up ; creating the phoneme superset is one possibility . speaker me018 is to look into the machines that mn007 has been running data on to find out what they are . rather than treating the level of normalization as a further dimension of the project , whatever scheme ogi finds best will be used throughout . multiple machines and spert boards will be needed to run the processes , because they take so long . the group will consider using articulatory features rather than straight phonemes , though the mapping would not be perfect .

it is not yet clear which combinations of dimensions , and which features , should be run in the cube project . this matters because the processes are going to be large , and processor - and memory - hungry . to bear in mind is the fact that the cellular industry has a particular image of distributed speech recognition , and that is what they are after . one must be careful when using a broad training source that is carefully hand - marked , because it would be unclear whether the breadth or the hand - marking is the reason for any improvement . memory is of concern because the final product needs to run on potentially cheaper cell phones , which have limited memory capacity . ogi does not have a phoneme superset already prepared , for they are working with clusters , which may be good enough for digits but not for discriminating words .

speaker mn007 has been preparing the french digit database , training and testing with varying noise . speaker me006 has installed updated software for everyone , is working on label files from timit for training neural nets , and is trying to figure out what the inputs to the cube should be . speaker fn002 has been testing the italian database on a net trained on spanish , though she has had problems with incompatible labels . within the next year or two the network is to be upgraded , and in a couple of weeks the group should have four new thirty - six gigabyte drives on the file server . me018 has been copying some corpora to a non - backed - up disk , but has not yet deleted the originals . the current plan is to use a superset of phones for the cube project , derived from the various training languages . htk training currently takes six hours to a day , and neural net training takes one to two days .
###dialogue: professor f: so you think we 're going now , yes ? ok , good . alright going again so we 're gon na go around as before , and do our digits . transcript one three one dash one three zero . three two three four seven six five three one six two four one six seven eight nine zero nine four zero three zero one five eight one seven three five three two six eight zero three six two four three zero seven four five zero six nine four seven four eight five seven nine six one five o seven eight o two zero nine six zero four zero one two , you do n't actually n need to say the name . professor f: not that there 's anything defamatory about eight five seven or anything , anyway . so here 's what i have for i was just jotting down things th w that we should do today . this is what i have for an agenda so far , we should talk a little bit about the plans for the field trip next week . a number of us are doing a field trip to ogi and mostly first though about the logistics for it . then maybe later on in the meeting we should talk about what we actually , might accomplish . , in and go around see what people have been doing talk about that , a r progress report . , essentially . and then another topic i had was that dave here had said " give me something to do . " and i have failed so far in doing that . and so maybe we can discuss that a little bit . if we find some holes in some things that someone could use some help with , he 's volunteering to help . professor f: ok , always count on a serious comment from that corner . so , , and , then , talk a little bit about disks and resource issues that 's starting to get worked out . and then , anything else anybody has that is n't in that list ? grad d: i was just wondering , does this mean the battery 's dying and i should change it ? professor f: it looks full of electrons . ok . plenty of electrons left there . ok , so , i wanted to start this with this mundane thing . i it was my bright idea to have us take a plane that leaves at seven twenty in the morning . professor f: this is the reason i did it was because otherwise for those of us who have to come back the same day it is really not much of a visit . so the issue is how would we ever accomplish that ? what part of town do you live in ? professor f: ok , so would it be easier those of you who are not , used to this area , it can be very tricky to get to the airport at , six thirty . so . would it be easier for you if you came here and i drove you ? , ok . professor f: , i ' m afraid we need to do that to get there on time . professor f: , so i 'll just pull up in front at six and just be out front . and , and , that 'll be plenty of time . it 'll take it wo n't be bad traffic that time of day and professor f: bridge , the turnoff to the bridge wo n't even do that . , just go down martin luther king . professor f: and then martin luther king to nine - eighty to eight - eighty , and it 's it 'd take us , tops thirty minutes to get there . professor f: so that leaves us fifty minutes before the plane it 'll just so great , ok so that 'll it 's , it 's still not going to be really easy but particularly for barry and me , we 're not staying overnight so we do n't need to bring anything particularly except for a pad of paper and so , and , you , two have to bring a little bit but , do n't bring a footlocker and we 'll be ok professor f: w you 're staying overnight . i figured you would n't need a great big suitcase , professor f: six am in front . , i 'll be here . 
i 'll give you my phone number , if i ' m not here for a few m after a few minutes then professor f: nah , i 'll be fine . , it for me it just means getting up a half an hour earlier than i usually do . not not a lot , professor f: so that was the real important . , i figured maybe on the potential goals for the meeting until we talk about wh what 's been going on . so , what 's been going on ? why do n't we start over here . phd g: . , preparation of the french test data actually . so , it means that , it is , a digit french database of microphone speech , downsampled to eight kilohertz and i ' ve added noise to one part , with the actually the aurora - two noises . and , @ so this is a training part . and then the remaining part , i use for testing and with other noises . so we can so this is almost ready . i ' m preparing the htk baseline for this task . and , . professor f: , so the htk base lines so this is using mel cepstra and so on , or ? . ok . , i the p the plan is , to then given this what 's the plan again ? professor f: with so does i just remind me of what you were going to do with the what 's y you just described what you ' ve been doing . so if you could remind me of what you 're going to be doing . , this is , . phd g: we actually we want to , mmm , , analyze three dimensions , the feature dimension , the training data dimension , and the test data dimension . , what we want to do is first we have number for each task . so we have the , ti - digit task , the italian task , the french task and the finnish task . so we have numbers with systems neural networks trained on the task data . and then to have systems with neural networks trained on , data from the same language , if possible , with , using a more generic database , which is phonetically balanced , and . professor f: so - so we had talked i we had talked at one point about maybe , the language id corpus ? phd g: ye - , but , these corpus , w there is a callhome and a callfriend also , the callfriend is for language ind identification . , anyway , these corpus are all telephone speech . so , . this could be a problem for why ? because , the speechdat databases are not telephone speech . they are downsampled to eight kilohertz but they are not with telephone bandwidth . professor f: that 's really funny is n't it ? cuz th this whole thing is for developing new standards for the telephone . phd g: , but the idea is to compute the feature before the before sending them to the , you do n't do not send speech , you send features , computed on th the device , professor f: i see , so your point is that it 's the features are computed locally , and so they are n't necessarily telephone bandwidth , or telephone distortions . professor f: i said @ , there 's an ogi language id , not the , the callfriend is a , ldc w thing , right ? phd g: yea - , there are also two other databases . one they call the multi - language database , and another one is a twenty - two language , something like that . but it 's also telephone speech . professor f: , we ' r e the bandwidth should n't be such an issue right ? because e this is downsampled and filtered , so it 's just the fact that it 's not telephone . and there are so many other differences between these different databases . some of this 's recorded in the car , and some of it 's there 's many different acoustic differences . so i ' m not if . 
, unless we 're going to include a bunch of car recordings in the training database , i ' m not if it 's completely rules it out if our if we if our major goal is to have phonetic context and you figure that there 's gon na be a mismatch in acoustic conditions does it make it much worse f to add another mismatch , if you will . professor f: , i i the question is how important is it to for us to get multiple languages , in there . phd g: , but - . , actually , for the moment if we w do not want to use these phone databases , we already have english , spanish and french , with microphone speech . so . professor f: so that 's what you 're thinking of using is the multi the equivalent of the multiple ? phd g: , this , actually , these three databases are generic databases . so w f for italian , which is close to spanish , french and , i , ti - digits we have both , digits training data and also more general training data . so . mmm . professor f: , we also have this broadcast news that we were talking about taking off the disk , which is microphone data for english . professor f: right . , so there 's plenty of around . ok , so anyway , th the basic plan is to , test this cube . yes . phd g: , and perhaps , we were thinking that perhaps the cross - language issue is not , so big of a issue . , w we perhaps we should not focus too much on that cross - language . , training a net on a language and testing a for another language . phd g: mmm . perhaps the most important is to have neural networks trained on the target languages . but , with a general database general databases . u so that th , the guy who has to develop an application with one language can use the net trained o on that language , or a generic net , professor f: so , if you 're talking about for producing these discriminative features that we 're talking about you ca n't do that . because because the what they 're asking for is a feature set . right ? and so , we 're the ones who have been weird by doing this training . but if we say , " no , you have to have a different feature set for each language , " this is ver gon na be very bad . professor f: . , in principle , conceptually , it 's like they want a re @ , they want a replacement for mel cepstra . so , we say " ok , this is the year two thousand , we ' ve got something much better than mel cepstra . it 's , gobbledy - gook . " ok ? and so we give them these gobbledy - gook features but these gobbledy - gook features are supposed to be good for any language . cuz you who 's gon na call , and , so it 's , how do what language it is ? somebody picks up the phone . so thi this is their image . someone picks up the phone , right ? professor f: but , no but , y you pick up the phone , you talk on the phone , professor f: and it sends features out . so the phone does n't a what your language is . professor f: but that 's the image they have , so that 's , one could argue all over the place about how things really will be in ten years . but the particular image that the cellular industry has right now is that it 's distributed speech recognition , where the , probabilistic part , and s semantics and are all on the servers , and you compute features of the , on the phone . so that 's what we 're involved in . we might or might not agree that 's the way it will be in ten years , but that 's what they 're asking for . so so that th it is an important issue whether it works cross - language . 
now , it 's the ogi , folks ' perspective right now that probably that 's not the biggest deal . and that the biggest deal is the , envir acoustic - environment mismatch . and they may very be right , but i was hoping we could just do a test and determine if that was true . if that 's true , we do n't need to worry so much . maybe maybe we have a couple languages in the training set and that gives us enough breadth , that the rest does n't matter . , the other thing is , this notion of training to which i they 're starting to look at up there , training to something more like articulatory features . , and if you have something that 's just good for distinguishing different articulatory features that should just be good across , a wide range of languages . , but , so i do n't th i know unfortunately i do n't i see what you 're comi where you 're coming from , but i do n't think we can ignore it . phd g: so we really have to do test with a real cross - language . , tr training on english and testing on italian , or or we can train or else , can we train a net on , a range of languages and which can include the test @ the target language , professor f: there 's , this is complex . so , ultimately , as i was saying , it does n't fit within their image that you switch nets based on language . now , can you include , the target language ? , from a purist 's standpoint it 'd be not to because then you can say when because surely someone is going to say at some point , " ok , so you put in the german and the finnish . , now , what do you do , when somebody has portuguese ? " , and , however , you are n't it is n't actually a constraint in this evaluation . so i would say if it looks like there 's a big difference to put it in , then we 'd make note of it , and then we probably put in the other , because we have so many other problems in trying to get things to work here that , it 's not so bad as long as we note it and say , " look , we did do this " . phd a: and so , ideally , what you 'd wanna do is you 'd wanna run it with and without the target language and the training set for a wide range of languages . phd a: and that way you can say , " , " we 're gon na build it for what we think are the most common ones " , but if that somebody uses it with a different language , " here 's what 's you 're l here 's what 's likely to happen . " professor f: , cuz the truth is , is that it 's not like there are , al although there are thousands of languages , from , the point of view of cellular companies , there are n't . professor f: there 's , there 's fifty , so , an and they are n't , with the exception of finnish , which i it 's pretty different from most things . , it 's , most of them are like at least some of the others . and so , our that spanish is like italian , and so on . i finnish is a little bit like hungarian , supposedly , professor f: or is , i kn , i know that h , i ' m not a linguist , but i hungarian and finnish and one of the languages from the former soviet union are in this same family . but they 're just these , countries that are pretty far apart from one another , have i , people rode in on horses and brought their grad c: , ok . , let 's see , i spent the last week , looking over stephane 's shoulder . and and understanding some of the data . i re - installed , htk , the free version , so , everybody 's now using three point o , which is the same version that , ogi is using . grad c: so , without any licensing big deals , or anything like that . 
and , so we ' ve been talking about this , cube thing , and it 's beginning more and more looking like the , the borge cube thing . it 's really gargantuan . , but i ' m am i grad c: exactly . , so i ' ve been looking at , timit . , the that we ' ve been working on with timit , trying to get a , a labels file so we can , train up a net on timit and test , the difference between this net trained on timit and a net trained on digits alone . , and seeing if it hurts or helps . anyway . professor f: , when y just to clarify , when you 're talking about training up a net , you 're talking about training up a net for a tandem approach ? grad c: , the inputs are one dimension of the cube , which , we ' ve talked about it being , plp , m f c cs , j - jrasta , jrasta - lda grad c: , i have n't decided on the initial thing . probably probably something like plp . professor f: . , so you take plp and you , do it , you , use htk with it with the transformed features using a neural net that 's trained . and the training could either be from digits itself or from timit . professor f: and that 's the and , and th and then the testing would be these other things which might be foreign language . i see . i get in the picture about the cube . professor f: , those listening to this will not have a picture either , so , i ' m not any worse off . but but at some point somebody should just show me the cube . it sounds s i get the general idea of it , phd a: so , when you said that you were getting the labels for timit , are y what do you mean by that ? grad c: b may , i ' m just , transforming them from the , the standard timit transcriptions into a long huge p - file to do training . phd a: i was just wondering because that test you 're t you 're doing this test because you want to determine whether or not , having s general speech performs as having specific speech . professor f: , especially when you go over the different languages again , because you 'd the different languages have different words for the different digits , phd a: , so i was just wondering if the fact that timit you 're using the hand - labeled from timit might be confuse the results that you get . phd a: right , but if it 's better , it may be better because it was hand - labeled . professor f: , i i ' m sounding cavalier , but , you have , a bunch of labels and they 're han hand - marked . , i , actually , timit was not entirely hand - marked . it was automatically first , and then hand - corrected . professor f: but but , , it might be a better source . so , i it 's you 're right . it would be another interesting scientific question to ask , " is it because it 's a broad source or because it was , carefully ? " . and that 's something you could ask , but given limited time , the main thing is if it 's a better thing for going across languages on this training tandem system , then it 's probably grad c: , right . , there 's a mapping from the sixty - one phonemes in timit to fifty - six , the icsi fifty - six . grad c: and then the digits phonemes , there 's about twenty - two or twenty - four of them ? is that right ? phd g: but , actually , the issue of phoneti phon phone phoneme mappings will arise when we will do severa use several languages because you , some phonemes are not , in every languages , and so we plan to develop a subset of the phonemes , that includes , all the phonemes of our training languages , and use a network with one hundred outputs like that . phd e: i th i looks the sampa phone . sampa phone ? 
for english american english , and the language who have more phone are the english . of the these language . but n , in spain , the spanish have several phone that d does n't appear in the e english and we thought to complete . but for that , it needs we must r h do a lot of work because we need to generate new tran transcription for the database that we have . phd b: other than the language , is there a reason not to use the timit phone set ? cuz it 's larger ? as opposed to the icsi phone set ? grad c: you mean why map the sixty - one to the fifty - six ? i . i have professor f: , i forget if that happened starting with you , or was it o or if it was eric , afterwards who did that . but , there were several of the phones that were just hardly ever there . phd a: and some of them , they were making distinctions between silence at the end and silence at the beginning , when really they 're both silence . i th it was things like that got it mapped down to fifty - six . professor f: , especially in a system like ours , which is a discriminative system . , you 're really asking this net to learn . it 's it 's hard . phd a: there 's not much difference , really . and the ones that are gone , are there was they also in timit had like a glottal stop , which was a short period of silence , professor f: i it 's actually pretty common that a lot of the recognition systems people use have things like , say thirty - nine , phone symbols , and then they get the variety by bringing in the context , the phonetic context . so we actually have an unusually large number in what we tend to use here . so , a actually maybe now you ' ve got me intrigued . what there 's can you describe what 's on the cube ? grad c: and maybe we could sections in the cube for people to work on . , do you wanna do it ? professor f: so even though the meeting recorder does n't , and since you 're not running a video camera we wo n't get this , but if you use a board it 'll help us anyway . , point out one of the limitations of this medium , but you ' ve got the wireless on , grad c: the first dimension is the features that we 're going to use . and the second dimension , is the training corpus . and that 's the training on the discriminant neural net . and the last dimension happens to be professor f: . so the training for htk is always that 's always set up for the individual test , that there 's some training data and some test data . so that 's different than this . grad c: right , right . this is this is for ann only . and , the training for the htk models is always , fixed for whatever language you 're testing on . and then , there 's the testing corpus . so , then it 's probably instructive to go and show you the features that we were talking about . so , let 's see . help me out with phd g: so , something like , s tct within bands and . and then multi - band after networks . meaning that we would have , neural networks , discriminant neural networks for each band . and using the outputs of these networks or the linear outputs like that . phd a: what about mel cepstrum ? or is that you do n't include that because it 's part of the base ? professor f: , y you do have a baseline system that 's m that 's mel cepstra , professor f: probably should . at least conceptually , it does n't meant you actually have to do it , conceptually it makes sense as a base line . phd a: it 'd be an interesting test just to have just to do mfcc with the neural net phd a: and everything else the same . 
compare that with just m - mfcc without the net . grad c: i think dan did some of that . , in his previous aurora experiments . and with the net it 's wonderful . without the net it 's just baseline . professor f: , ogi folks have been doing that , too . d because that for a bunch of their experiments they used , mel cepstra , actually . , that 's there and this is here and so on . ok ? grad c: , for the training corpus , we have , the d digits { nonvocalsound } from the various languages . , english spanish , french what else do we have ? phd g: so english , finnish and italian are aurora . and spanish and french is something that we can use in addition to aurora . professor f: probably so . , herve always insists that belgian is i is pure french , has nothing to do with but he says those parisians talk funny . grad c: and then we have , , broader corpus , like timit . timit so far , grad c: , ti - digits all these aurora f d data p data is from is derived from ti - digits . , they corrupted it with , different kinds of noises at different snr levels . professor f: y and stephane was saying there 's some broader s material in the french also ? phd b: did the aurora people actually corrupt it themselves , or just specify the signal and the signal - t grad c: they they corrupted it , themselves , but they also included the noise files for us , or so we can go ahead and corrupt other things . professor f: i ' m just curious , carmen , i could n't tell if you were joking or i is it is it mexican spanish , professor f: or is it , no . it 's it 's spanish from spain , spanish . professor f: alright . spanish from spain . , we 're really covered there now . ok . and the french from france . professor f: so - s so it 's not really from the us either . is that ? ok . grad c: and , with within the training corporas , we 're , thinking about , training with noise . incorporating the same kinds of noises that , aurora is in incorporating in their , in their training corpus . , i do n't think we 're given the , the unseen noise conditions , though , professor f: what they were saying was that , for this next test there 's gon na be some of the cases where they have the same type of noise as you were given before hand and some cases where you 're not . professor f: so , presumably , that 'll be part of the topic of analysis of the test results , is how you do when it 's matching noise and how you do where it 's not . that 's right . professor f: ok . , i it does seem to me that a lot of times when you train with something that 's at least a little bit noisy it can help you out in other kinds of noise even if it 's not matching just because there 's some more variance that you ' ve built into things . but , but , , exactly how it will work will depend on how near it is to what you had ahead of time . so . ok , so that 's your training corpus , and then your testing corpus ? grad c: , the testing corporas are , just , the same ones as aurora testing . and , that includes , the english spa - , italian . finnish . grad c: , we ' r we 're gon na get german , ge - at the final test will have german . professor f: , so , the final test , on a , is supposed to be german and danish , phd g: and , according to hynek it will be we will have this at the end of november , professor f: something like seven things in each , each column . so that 's , three hundred and forty - three , different systems that are going to be developed . there 's three of you . grad c: there - there 's three tests . 
type - a , type - b , and type - c . and they 're all gon na be test tested , with one training of the htk system . , there 's a script that tests all three different types of noise conditions . test - a is like a matched noise . test - b is a slightly mismatched . and test - c is a , mismatched channel . grad c: we 're gon na be , training on the noise files that we do have . professor f: so i the question is how long does it take to do a training ? , it 's not crazy t , these are a lot of these are built - in things and we know we have programs that compute plp , we have msg , we have jra , a lot of these things will just happen , wo n't take a huge amount of development , it 's just trying it out . so , we actually can do quite a few experiments . but how long does it take , do we think , for one of these trainings ? professor f: good point . a major advantage of msg , i see , th that we ' ve seen in the past is combined with plp . professor f: you do n't wanna , let 's see , seven choose two would be , twenty - one different combinations . professor f: so plp and msg we definitely wanna try cuz we ' ve had a lot of good experience with putting those together . phd a: when you do that , you 're increasing the size of the inputs to the net . do you have to reduce the hidden layer , ? phd a: no , no , i ' m just wondering about number of parameters in the net . do you have to worry about keeping that the same , phd b: is n't there like a limit on the computation load , or d latency , like that for aurora task ? professor f: we have n't talked about any of that , have we ? so , there 's not really a limit . what it is that there 's , it 's just penalty , that that if you 're using , a megabyte , then they 'll say that 's very , but , it will never go on a cheap cell phone . and , u , the computation is n't so much of a problem . it 's more the memory . , and , expensive cell phones , exa expensive hand - helds , and , are gon na have lots of memory . so it 's just that , these people see the cheap cell phones as being still the biggest market , but , i was just realizing that , actually , it does n't explode out , it 's not really two to the seventh . but it 's but i it does n't really explode out the number of trainings cuz these were all trained individually . , if you have all of these nets trained some place , then , you can combine their outputs and do the kl transformation and , so , what it blows out is the number of testings . and , and the number of times you do that last part . but that last part , is so has got ta be pretty quick , it 's just running the data through phd a: is that just separate nets for each language then combined , or is that actually one net trained on ? professor f: one would think one net , but we ' ve i do n't think we ' ve tested that . phd g: so , in the broader training corpus we can use , the three , or , a combination of two languages . professor f: so , i the first thing is if w if we know how much a how long a training takes , if we can train up all these combinations , then we can start working on testing of them individually , and in combination . because the putting them in combination , is not as much computationally as the r training of the nets in the first place . so y you do have to compute the kl transformation . which is a little bit , but it 's not too much . phd g: but . but there is the testing also , which implies training , the htk models professor f: i what ravioli is . is it is it an ultra - five , or is it a ? 
phd g: it 's - it 's not so long because , the ti - digits test data is about , how many hours ? , th , thirty hours of speech , professor f: so , clearly , there 's no way we can even begin to do an any significant amount here unless we use multiple machines . professor f: so so w we there 's plenty of machines here and they 're n they 're often not in a great deal of use . so , it 's key that the that you look at , , what machines are fast , what machines are used a lot , are we still using p - make ? is that ? professor f: , you have a , once you get the basic thing set up , you have just all the , a all these combinations , it 's it 's let 's say it 's six hours or eight hours , for the training of htk . how long is it for training of , the neural net ? professor f: again , we do have a bunch of spert boards . and there 's you folks are probably go the ones using them right now . grad c: ad - adam did some testing . or either adam or dan did some testing and they found that the spert board 's still faster . and the benefits is that , you run out of spert and then you can do other things on your computer , and you do n't professor f: so you could be we have quite a few spert boards . you could set up , , ten different jobs , to run on spert different spert boards and have ten other jobs running on different computers . so , it 's got to take that thing , or we 're not going to get through any significant number of these . so this is , i like this because what it no , no , what i like about it is we do have a problem that we have very limited time . so , with very limited time , we actually have really quite a bit of computational resource available if you , get a look across the institute and how little things are being used . on the other hand , almost anything that really i , is new , where we 're saying , " , let 's look at , like we were talking before about , voiced - unvoiced - silence detection features and all those sort " that 's it 's a great thing to go to . but if it 's new , then we have this development and learning process t to go through on top of just the all the work . so , i do n't see how we 'd do it . so what i like about this is you have listed all the things that we already know how to do . and and all the kinds of data that we , at this point , already have . and , you 're just saying let 's look at the outer product of all of these things and see if we can calculate them . a am i am i interpreting this correctly ? is this what you 're thinking of doing in the short term ? so so then it 's just the missing piece is that you need to , you know , talk to , chuck , talk to , adam , sort out about , what 's the best way to really , attack this as a mass problem in terms of using many machines . and , then , set it up in terms of scripts and , in kind o some structured way . , and , when we go to , ogi next week , we can then present to them , what it is that we 're doing . and , we can pull things out of this list that we think they are doing sufficiently , professor f: that , we 're not we wo n't be contributing that much . then , like , we 're there . grad c: , for the for nets trained on digits , we have been using , four hundred order hidden units . and , for the broader class nets we 're going to increase that because the , the digits nets only correspond to about twenty phonemes . so . professor f: it 's not actually broader class , it 's actually finer class , but you mean y you mean more classes . 
professor f: carmen , did you do you have something else to add ? we you have n't talked too much , phd e: d i begin to work with the italian database to nnn , to with the f front - end and with the htk program and the @ . and i trained , with the spanish two neural network with plp and with lograsta plp . i exactly what is better if lograsta or jrasta . professor f: , jrasta has the potential to do better , but it does n't always . it 's i jrasta is more complicated . it 's it 's , instead of doing rasta with a log , you 're doing rasta with a log - like function that varies depending on a j parameter , which is supposed to be sensitive to the amount of noise there is . so , it 's like the right transformation to do the filtering in , is dependent on how much noise there is . professor f: and so in jrasta you attempt to do that . it 's a little complicated because once you do that , you end up in some funny domain and you end up having to do a transformation afterwards , which requires some tables . and , so it 's a little messier , there 's more ways that it can go wrong , but if you 're careful with it , it can do better . phd e: , and to recognize the italian digits with the neural netw spanish neural network , and also to train another neural network with the spanish digits , the database of spanish digits . and i working that . but prepa to prepare the database are difficult . was for me , n it was a difficult work last week with the labels because the program with the label obtained that i have , the albayzin , is different w to the label to train the neural network . and that is another work that we must to do , to change . phd e: , albayzin database was labeled automatically with htk . it 's not hand it 's not labels by hand . professor f: so let 's start over . so , ti timi timit 's hand - labeled , and you 're saying about the spanish ? phd e: the spanish labels ? that was in different format , that the format for the program to train the neural network . phd e: it 's it 's but n yes , because they have one program , feacalc , but no , l labecut , but do n't does n't , include the htk format to convert . and , i what . i ask e even i ask to dan ellis what do that , and h they he say me that h he does does n't any s any form to do that . and at the end , that with labecut transfer to ascii format , and htk is an ascii format . and i m do another , one program to put ascii format of htk to ase ay ac ascii format to exceed and they used labcut to pass . actually that was complicated , professor f: so it 's just usual sometimes say housekeeping , to get these things sorted out . so it seems like there 's some peculiarities of the , of each of these dimensions that are getting sorted out . and then , if you work on getting the , assembly lines together , and then the pieces get ready to go into the assembly line and gradually can start , start turning the crank , more or less . we have a lot more computational capability here than they do at ogi , so that i if what 's what 's great about this is it sets it up in a very systematic way , so that , once these all of these , mundane but real problems get sorted out , we can just start turning the crank and push all of us through , and then finally figure out what 's best . grad c: i was thinking two things . , the first thing was , we actually had thought of this as like , not in stages , but more along the time axis . just like one stream at a time , je - je check out the results and go that way . 
professor f: no , i ' m just saying , i ' m just thinking of it like loops , and so , y if you had three nested loops , that you have a choice for this , and a choice for that , and you 're going through them all . that that 's what i meant . professor f: and , that once you get a better handle on how much you can realistically do , , concurrently on different machines , different sperts , and you see how long it takes on what machine and , you can stand back from it and say , " ok , if we look these combinations we 're talking about , and combinations of combinations , and , " you 'll probably find you ca n't do it all . so then at that point , we should sort out which ones do we throw away . which of the combinations across , what are the most likely ones , i still think we could do a lot of them . , it would n't surprise me if we could do a hundred of them . but , probably when you include all the combinations , you 're actually talking about a thousand of them , and that 's probably more than we can do . but a hundred is a lot . and and , grad c: and the second thing was about scratch space . and you sent an email about , e scratch space for people to work on . and i know that , stephane 's working from an nt machine , so his home directory exists somewhere else . professor f: his his is somewhere else , my point i want to for bring it back to that . my th i want to clarify my point about that chuck repeated in his note . we 're over the next year or two , we 're gon na be upgrading the networks in this place , but right now they 're still all te all ten megabit lines . and we have reached the this the machines are getting faster and faster . so , it actually has reached the point where it 's a significant drag on the time for something to move the data from one place to another . so , you do n't w especially in something with repetitive computation where you 're going over it multiple times , you do do n't want to have the data that you 're working on distant from where it 's being where the computation 's being done if you can help it . now , we are getting more disk for the central file server , which , since it 's not a computational server , would seem to be a contradiction to what said . but the idea is that , suppose you 're working with , this big bunch of multi multilingual databases . you put them all in the central ser at the cen central file server . then , when you 're working with something and accessing it many times , you copy the piece of it that you 're working with over to some place that 's close to where the computation is and then do all the work there . and then that way you wo n't have the network you wo n't be clogging the network for yourself and others . that 's the idea . so , it 's gon na take us it may be too late for this , p precise crunch we 're in now , but , we 're , it 's gon na take us a couple weeks at least to get the , the amount of disk we 're gon na be getting . we 're actually gon na get , four more , thirty - six gigabyte drives and , put them on another disk rack . we ran out of space on the disk rack that we had , so we 're getting another disk rack and four more drives to share between , primarily between this project and the meetings project . we ' ve put another i there 's another eighteen gigabytes that 's in there now to help us with the immediate crunch . are you saying so i where you 're stephane , where you 're doing your computations . 
if i so , you 're on an nt machine , so you 're using some external machine professor f: are these , computational servers , ? i ' m i ' ve been out of it . professor f: unfortunately , these days my idea of running comput of computa doing computation is running a spread sheet . have n't been doing much computing personally , so those are computational servers . so i the other question is what disk there i space there is there on the computational servers . phd a: , i ' m not what 's available on is it you said nutmeg and what was the other one ? professor f: , so , chuck will be the one who will be sorting out what disk needs to be where , and so on , and i 'll be the one who says , " ok , spend the money . " so . which , n these days , if you 're talking about scratch space , it does n't increase the , need for backup , and , it 's not that big a d and the disks themselves are not that expensive . right now it 's phd a: what you can do , when you 're on that machine , is , just go to the slash - scratch directory , and do a df minus k , and it 'll tell you if there 's space available . , and if there is then , professor f: but was n't it , dave was saying that he preferred that people did n't put in slash - scratch . it 's more putting in d s xa or xb or , phd a: , there 's different there , there 's so there 's the slash - x - whatever disks , and then there 's slash - scratch . and both of those two kinds are not backed up . and if it 's called " slash - scratch " , it means it 's probably an internal disk to the machine . and so that 's the thing where , like if , ok , if you do n't have an nt , but you have a unix workstation , and they attach an external disk , it 'll be called " slash - x - something " , if it 's not backed up and it 'll be " slash - d - something " if it is backed up . and if it 's inside the machine on the desk , it 's called " slash - scratch " . but the problem is , if you ever get a new machine , they take your machine away . it 's easy to unhook the external disks , put them back on the new machine , but then your slash - scratch is gone . so , you do n't wanna put anything in slash - scratch that you wanna keep around for a long period of time . but if it 's a copy of , say , some data that 's on a server , you can put it on slash - scratch because , first of all it 's not backed up , and second it does n't matter if that machine disappears and you get a new machine because you just recopy it to slash - scratch . so tha that 's why i was saying you could check slash - scratch on those on , mustard and nutmeg to see if there 's space that you could use there . you could also use slash - x - whatever disks on mustard and nutmeg . and we do have so you , it 's better to have things local if you 're gon na run over them lots of times so you do n't have to go to the network . professor f: right , so es so especially if you 're right , if you 're taking some piece of the training corpus , which usually resides in where chuck is putting it all on the , file server , then , it 's fine if it 's not backed up because if it g gets wiped out , y it is backed up on the other disk . phd a: , so , one of the things that i need to i ' ve started looking at , is this the appropriate time to talk about the disk space ? i ' ve started looking at , disk space . dan david , put a new , drive onto abbott , that 's an x disk , which means it 's not backed up . 
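a small sketch of the space check described above, done from python rather than with df -k; the mount point, the threshold, and the corpus paths are illustrative assumptions:

```python
import shutil

# how much room is on the local (not backed up) scratch disk before copying
# a working set close to where the computation runs
usage = shutil.disk_usage("/scratch")
free_gb = usage.free / 1024 ** 3
print(f"free on /scratch: {free_gb:.1f} GB")

if free_gb > 5.0:  # arbitrary threshold for this sketch
    # copy only the piece of the corpus being worked on, per the idea above
    shutil.copytree("/u/corpora/albayzin/train", "/scratch/albayzin-train")
```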
so , i ' ve been going through and copying data that is , some corpus usually , that we ' ve got on a cd - rom , onto that new disk to free up space on other disks . and , so far , i ' ve copied a couple of carmen 's , databases over there . we have n't deleted them off of the slash - dc disk that they 're on right now in abbott , but we i would like to go through sit down with you about some of these other ones and see if we can move them onto , this new disk also . there 's there 's a lot more space there , phd a: and it 'll free up more space for doing the experiments and things . so , anything that you do n't need backed up , we can put on this new disk . , but if it 's experiments and you 're creating files and things that you 're gon na need , you probably wanna have those on a disk that 's backed up , just in case something goes wrong . so far i ' ve copied a couple of things , but i have n't deleted anything off of the old disk to make room yet . and i have n't looked at the any of the aurora , except for the spanish . so i 'll need to get together with you and see what data we can move onto the new disk . professor f: an another question occurred to me is what were you folks planning to do about normalization ? phd g: so , so that this could be another dimension , but we think perhaps we can use the best , , normalization scheme as ogi is using , so , with parameters that they use there , professor f: it 's i we seem to have enough dimensions as it is . so probably if we take their probably the on - line normalization because then it 's if we do anything else , we 're gon na end up having to do on - line normalization too , so we may as just do on - line normalization . so that it 's plausible for the final thing . good . so , i , th the other topic i maybe we 're already there , or almost there , is goals for the for next week 's meeting . i it seems to me that we wanna do is flush out what you put on the board here . , maybe , have it be somewhat visual , a little bit . professor f: so w we can say what we 're doing , and , also , if you have sorted out , this information about how long i roughly how long it takes to do on what and , what we can how many of these trainings , and testings and that we can realistically do , then one of the big goals of going there next week would be to actually settle on which of them we 're gon na do . and , when we come back we can charge in and do it . anything else that i a actually started out this field trip started off with , stephane talking to hynek , so you may have had other goals , for going up , and any anything else you can think of would be we should think about accomplishing ? , i ' m just saying this because maybe there 's things we need to do in preparation . professor f: and and the other the last topic i had here was , d dave 's fine offer to , do something on this . he 's doing he 's working on other things , but to do something on this project . so the question is , " where where could we , most use dave 's help ? " phd g: i was thinking perhaps if , additionally to all these experiments , which is not really research , it 's , running programs and , trying to have a closer look at the perhaps the , speech , noise detection or , voiced - sound - unvoiced - sound detection and which could be important in i for noise phd a: that would be a that 's a big deal . because the , the thing that sunil was talking about , with the labels , labeling the database when it got to the noisy ? the that that really throws things off . 
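since on-line normalization looks like the default choice here, this is a minimal sketch of per-frame running mean and variance normalization; the forgetting factor is an assumed value, and the actual parameters would be taken from ogi's setup as discussed:

```python
import numpy as np

def online_normalize(frames, alpha=0.995):
    # running per-dimension mean/variance estimates updated frame by frame,
    # instead of whole-utterance statistics, so the front-end stays causal
    mean = np.zeros(frames.shape[1])
    var = np.ones(frames.shape[1])
    out = np.empty_like(frames)
    for t, x in enumerate(frames):
        mean = alpha * mean + (1 - alpha) * x
        var = alpha * var + (1 - alpha) * (x - mean) ** 2
        out[t] = (x - mean) / np.sqrt(var + 1e-8)
    return out

feats = np.random.randn(500, 13)   # stand-in feature matrix (frames x dims)
print(online_normalize(feats).shape)
```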
, having the noise all of a sudden , your , speech detector , the , what was it ? what was happening with his thing ? he was running through these models very quickly . he was getting lots of , insertions , is what it was , in his recognitions . professor f: the only problem , maybe that 's the right thing the only problem i have with it is exactly the same reason why you thought it 'd be a good thing to do . , that let 's fall back to that . but the first responsibility is to figure out if there 's something that , an additional that 's a good thing you remove the mike . go ahead , good . what an additional clever person could help with when we 're really in a crunch for time . cuz dave 's gon na be around for a long time , he 's he 's gon na be here for years . and so , over years , if he 's interested in , voiced - unvoiced - silence , he could do a lot . but if there 's something else that he could be doing , that would help us when we 're strapped for time we have we ' ve , only , another month or two to , with the holidays in the middle of it , to get a lot done . if we can think of something some piece of this that 's going to be the very fact that it is just work , and i and it 's running programs and , is exactly why it 's possible that it some piece of could be handed to someone to do , because it 's not so that 's the question . and we do n't have to solve it right this s second , but if we could think of some piece that 's defined , that he could help with , he 's expressing a will willingness to do that . phd e: yes , maybe to , mmm , put together the label the labels between timit and spanish like that . professor f: that 's something that needs to be done in any event . so what we were just saying is that , i was arguing for , if possible , coming up with something that really was development and was n't research because we 're we have a time crunch . and so , if there 's something that would save some time that someone else could do on some other piece , then we should think of that first . see the thing with voiced - unvoiced - silence is i really think that it 's to do a poor job is pretty quick , or , a so - so job . you can you can throw in a couple fea we kinds of features help with it . you can throw something in . you can do pretty . but i remember , when you were working on that , and you worked on for few months , as i recall , and you got to , say ninety - three percent , and getting to ninety - four really hard . professor f: and th the other tricky thing is , since we are , even though we 're not we do n't have a strict prohibition on memory size , and computational complexity , clearly there 's some limitation to it . so if we have to if we say we have to have a pitch detector , say , if we 're trying to incorporate pitch information , or at least some harmonic harmonicity , this is another whole thing , take a while to develop . anyway , it 's a very interesting topic . , one of the a lot of people would say , and dan would also , that one of the things wrong with current speech recognition is that we really do throw away all the harmonicity information . , we try to get spectral envelopes . reason for doing that is that most of the information about the phonetic identity is in the spectral envelopes are not in the harmonic detail . but the harmonic detail does tell you something . like the fact that there is harmonic detail is real important . so , . 
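the quick "so-so job" version of voiced/unvoiced/silence detection can be sketched with two classic features; the thresholds are arbitrary assumptions, and closing the last percent or two of accuracy is where the months of work go:

```python
import numpy as np

def vus_label(frame, energy_thresh=1e-3, zcr_thresh=0.25):
    # crude three-way decision from short-time energy and zero-crossing rate
    energy = float(np.mean(frame ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) / 2))
    if energy < energy_thresh:
        return "silence"
    return "unvoiced" if zcr > zcr_thresh else "voiced"

t = np.arange(160) / 8000.0                      # one 20 ms frame at 8 khz
print(vus_label(np.sin(2 * np.pi * 200 * t)))    # periodic -> "voiced"
print(vus_label(np.zeros(160)))                  # empty -> "silence"
```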
so wh that so the other suggestion that just came up was , what about having him work on the , multilingual super f superset thing . , coming up with that and then , training it training a net on that , say , from timit . is that or , for multiple databases . what what would you think it would wh what would this task consist of ? phd g: , it would consist in , , creating the superset , and , modifying the lab labels for matching the superset . professor f: and then creating i m changing labels on timit ? or on or on multiple language multiple languages ? grad c: there 's , carmen was talking about this sampa thing , and it 's , it 's an effort by linguists to come up with , a machine readable ipa , thing , and , they have a web site that stephane was showing us that has , has all the english phonemes and their sampa correspondent , phoneme , and then , they have spanish , they have german , they have all sorts of languages , mapping to the sampa phonemes , which phd e: the tr the transcription , though , for albayzin is n the transcription are of sampa the same , how you say , symbol that sampa appear . professor f: what , has ogi done anything about this issue ? do they have any superset that they already have ? phd g: i do n't . , they 're going actually the other way , defining , phoneme clusters , . professor f: aha . that 's right . , and that 's an interesting way to go too . phd a: so they just throw the speech from all different languages together , then cluster it into sixty or fifty or whatever clusters ? phd g: they ' ve not done it , doing , multiple language yet , but what they did is to training , english nets with all the phonemes , and then training it in english nets with , seventeen , it was seventeen , broad classes . phd g: , so . and , . and the result was that , when testing on cross - language it was better . but hynek did n't add did n't have all the results when he showed me that , so , . professor f: is there 's some way that we should tie into that with this . , if that is a better thing to do , should we leverage that , rather than doing , our own . so , if i if they s , we have i we have the trainings with our own categories . and now we 're saying , " , how do we handle cross - language ? " and one way is to come up with a superset , but they are als they 're trying coming up with clustered , and do we think there 's something wrong with that ? phd g: or , because , for the moment we are testing on digits , and e i perhaps u using broad phoneme classes , it 's ok for , classifying the digits , but as soon as you will have more words , words can differ with only a single phoneme , and which could be the same , class . phd g: , but you will ask the net to put one for th the phoneme class and so . phd a: so you 're saying that there may not be enough information coming out of the net to help you discriminate the words ? phd b: fact , most confusions are within the phone classes , right ? , larry was saying like obstruents are only confused with other obstruents , et cetera . professor f: instead of the superclass thing , which is to take so suppose y you do n't really mark arti to really mark articulatory features , you really wanna look at the acoustics and see where everything is , and we 're not gon na do that . , the second class way of doing it is to look at the , phones that are labeled and translate them into acoustic , articulatory , features . so it wo n't really be right . 
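a sketch of the superset idea under discussion: take the union of per-language sampa inventories and remap each language's labels onto it, so one net can be trained across languages. the inventory fragments below are hypothetical, not the actual sampa tables:

```python
# hypothetical per-language phone fragments in sampa-like notation
english = {"p", "b", "t", "d", "k", "g", "s", "z", "T", "D"}
spanish = {"p", "b", "t", "d", "k", "g", "s", "T", "rr"}
italian = {"p", "b", "t", "d", "k", "g", "s", "z", "ts", "dz"}

# the superset is just the union over languages: one shared output layer
superset = sorted(english | spanish | italian)
index = {ph: i for i, ph in enumerate(superset)}

def relabel(lang_labels):
    # map a language-specific label sequence onto superset indices
    return [index[ph] for ph in lang_labels]

print(len(superset))               # size of the shared net output layer
print(relabel(["t", "rr", "s"]))   # a spanish sequence in superset indices
```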
you wo n't really have these overlapping things and , professor f: you either do that or you have multiple nets . and , i if our software this if the qu versions of the quicknet that we 're using allows for that . do ? professor f: so that 'll work , that 's another thing that could be done is that we could , just translate instead of translating to a superset , just translate to articulatory features , some set of articulatory features and train with that . now the fact even though it 's a smaller number , it 's still fine because you have the , combinations . so , it has every , it had has every distinction in it that you would have the other way . but it should go across languages better . phd a: we could do an interesting cheating experiment with that too . we could i , if you had the phone labels , you could replace them by their articulatory features and then feed in a vector with those , things turned on based on what they 're supposed to be for each phone to see if it if you get a big win . do what i ' m saying ? phd a: , if your net is gon na be outputting , a vector of , it 's gon na have probabilities , but let 's say that they were ones and zeros , then y and for each , i if this for your testing data , but if for your test data , what the string of phones is and you have them aligned , then you can just instead of going through the net , just create the vector for each phone and feed that in to see if that data helps . , what made me think about this is , i was talking with hynek and he said that there was a guy at a t - andt who spent eighteen months working on a single feature . and because they had done some cheating experiments professor f: this was the guy that we were just talking a that we saw on campus . so , this was larry saul who did this . he used sonorants . phd a: , hynek said that , i before they had him work on this , they had done some experiment where if they could get that one feature right , it dramatically improved the result . phd a: so i was thinking , it made me think about this , that if it 'd be an interesting experiment just to see , if you did get all of those right . professor f: should be . because if you get all of them in there , that defines all of the phones . so that 's equivalent to saying that you ' ve got all the phones right . so , if that does n't help , there 's although , it would be make an interesting cheating experiment because we are using it in this funny way , where we 're converting it into features . phd a: and then you also what error they ' ve got on the htk side . ? it gives you your the best you could hope for , . phd b: the soft training of the nets still requires the vector to sum to one , though , phd b: so you ca n't really feed it , like , two articulatory features that are on at the same time with ones cuz it 'll normalize them down to one half like that , . phd b: is n't that what you 'll want ? if you 're gon na do a kl transform on it .
###summary: the main topics discussed were arrangements and objectives of an upcoming field trip to visit research partners ogi; a number of members reported their progress to date; whether there are any tasks that one member can help others with; and an overall description of the cube project , a multi-lingual speech recognition system for use by the cellular phone industry , along with consideration of some of the issues therein , specifically disk and resource issues. essentially the cube consists of three dimensions: input features; training corpus; and test corpus.
the most important concerns are which combinations of features to use , and what combinations of languages and broad/specific corpora to use for the training. the group will meet at the building at 6am to go to the airport for their field trip together. speaker me018 needs to discuss with speaker mn007 which files can be moved. for the ogi meeting they need to take a clear description of the cube project , and an estimate of how long the entire process should take. at the meeting they should discuss what they will ultimately put through the system. people are to consider what me034 could do on the project to speed things up , though creating the phoneme superset is a possibility. speaker me018 is to look into the machines that mn007 has been running data on to find out what they are. rather than considering the level of normalization as a further dimension to the project , whatever ogi finds best will be used systematically. multiple machines and spert boards need to be used to run processes on , because they take so long. they will consider looking at articulatory features rather than straight phonemes , though it wouldn't be perfect. it is not clear which combinations of dimensions ( i.e. which features ) should be run in the cube project. it is important to know because the processes are going to be large and processor and memory hungry. one point to bear in mind is that the cellular industry has its own image of speech recognition , and that is what they are after. care must be taken if using a training source that is both broad and carefully hand-marked , because it would be unclear which of the two is the reason for any improvement. memory is of concern , because the final product needs to run , potentially , on cheaper cell phones , which have limited memory capacity. ogi doesn't have a phoneme superset already prepared , for they are working with clusters , which may be good enough for digits , but not for discriminating words. speaker mn007 has been preparing the french digit database , training and testing with varying noise. speaker me006 has installed updated software for everyone , is working on label files from timit for training neural nets , and is trying to figure out what the input to the cube should be. speaker fn002 has been testing the italian database on a net trained on spanish , though she has had problems with incompatible labels. within the next year , the network is to be upgraded , and in a couple of weeks , the group should have access to 4 new 36 gigabyte drives. me018 has been copying some corpus material to a non-backed-up disk , but has not yet deleted the originals. the current plan is to use a superset of phones for the cube project derived from the various training languages. htk training currently takes 6 hours to a day , and the neural net takes 1-2 days.
grad a: ok , we 're on . so , this is gon na be a pretty short meeting because i have four agenda items , three of them were requested by jane who is not gon na be at the meeting today . so . the first was transcription status . does anyone besides jane the transcription status is ? phd f: first of all with ibm i got a note from brian yesterday saying that they finally made the tape for the thing that we sent them a week or week and a half ago phd f: and that it 's gone out to the transcribers and hopefully next week we 'll have the transcription back from that . phd f: jane seems to be moving right along on the transcriptions from the icsi side . she 's assigned , probably five or six m more meetings . phd f: and we ' ve run out of e d us because a certain number of them are , awaiting to go to ibm . phd d: so does she have transcribers right now who are sitting idle because there 's no data back from ibm phd d: because i need to ask jane whether it 's it would be ok for her , s some of her people to transcribe some of the initial data we got from the smartkom data collection , which is these short like five or seven minute sessions . phd d: and we want it , we need the again , we have a similar logistic set - up where we are supposed to send the data to munich phd d: and get it transcribed and get it back . but to get going we would like some of the data transcribed right away so we can get started . phd d: and so i wanted to ask jane if , maybe one of their transcribers could do since these are very short , that should really be , phd c: there 's only two channels . so it 's only as the synthesis does n't have to be transcribed . phd d: so it 's one channel to transcribe . and it 's one session is only like seven professor b: so that should have ma many fewer and it 's also not a bunch of interruptions with people and all that , phd d: right . and some of it is read speech , so we could give them the thing that they 're reading phd d: and so , i since she 's i was gon na ask her but since she 's not around i maybe i 'll phd d: if that 's ok with you to , get that to ask her for that , then i 'll do that . professor b: , if we 're held up on this other a little bit in order to encompass that , that 's ok because i , i still have high hopes that the ibm pipeline 'll work out for us , so it 's phd f: , and also related to the transcription , so i ' ve been trying to keep a web page up to date f showing what the current status is of the trans of all the things we ' ve collected and what stage each meeting is in , in terms of whether it 's phd f: - , i will . i that 's the thing that i sent out just to foo people saying can you update these pages phd f: and so that 's where i ' m putting it but i 'll send it out to the list telling people to look at it . grad a: , i have n't done that . so . i have lots of to add that 's just in my own directory . i 'll try to get to that . ok . so jane also wanted to talk about participant approval , but i do n't really think there 's much to talk about . i ' m just gon na do it . and , if anyone objects too much then they can do it instead . grad a: i ' m gon na send out to the participants , with links to web pages which contain the transcripts and allow them to suggest edits . and then bleep them out . phd f: so , the audio that they 're gon na have access to , will that be the uncompressed version ? or will you have scripts that like uncompress the various pieces and grad a: , that 's a good point . that 's a good point . 
, it 's probably going to have to be the uncompressed versions because , it takes too long to do random access decompression . phd f: , i was just wondering because we 're running out of the un - backed - up disk space on grad a: so , but that is a good point so we 'll get to that , too . , darpa demo status , not much to say . the back - end is working out fine . it 's more or less ready to go . i ' ve added some that indes indexes by the meeting type mr , edu , et cetera and also by the user id . so that the front - end can then do filtering based on that as . the back - end is , going more slowly as i s i said before just cuz i ' m not much of a tcl - tk programmer . and dave gelbart says he 's a little too busy . so don and i are gon na work on that and you and just talk about it off - line more . grad a: but the back - end was pretty smooth . so , we 'll have something . it may not be as as pretty as we might like , but we 'll have something . professor b: i wondered whe when we would reach dave 's saturation point . he 's been volunteering for everything and grad a: , he actually he volunteered but then he s then he retracted it . so . grad e: and , also , i was just showing andreas , i got an x waves display , and i how much more we can do with it with like the prosodic where we have like stylized pitches and signals and the transcripts on the bottom grad e: so , right now it 's just an x waves and then you have three windows but i , it looked pretty and i ' m it think it has potential for a little something , professor b: ok , so again , the issue is for july , the issue 's gon na be what can we fit into a windows machine , and so on , but phd c: i ' ve been putting together transcriber things for windows so i and i installed it on dave gelbart 's pc and it worked just fine . so hopefully that will work . phd d: really ? so is that because there 's some people it would be if we could get that to work at sri because the phd c: but but the problem is the version transcriber works with , the snack version , is one point six whatever and that 's not anymore supported . it 's not on the web page anymore . but wrote an email to the author of to the snack author and he sent me to one point six whatever library phd c: and so it works . , but then you ca n't add our patches and then the new version is different a and in , in terms of the source code . you you ca n't find the tcl files anymore . it 's some whatever wrapped thing and you ca n't access that so you have to install first install tcl then install snack and then install the transcriber thing and then do the patches . phd d: i wonder if we should contribute our changes back to the authors so that they maintain those changes along phd c: no , i have n't done that yet . i ' m nope . but i definitely will do that . professor b: so , can some of the that don 's talking about somehow fit into this , mean you just have a set of numbers that are associated with the grad e: , it 's ascii files or binary files , whatever representation . just three different it 's a waveform and just a stylized pitch vector so it 's grad e: we could do it in matl - you could do it in a number of different places i ' m . phd d: but it would be if the transcriber interface had like another window for the , maybe above the waveform where it would show some arbitrary valued function that is time synchron ti time synchronous with the wavform . grad a: it 'd be easy enough to add that . 
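a sketch of the static-image fallback for this display: a waveform panel with a time-aligned stylized-pitch panel under it. the data here is a stand-in; the real values would be read from the ascii or binary pitch files mentioned:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 2.0, 16000)                       # 2 s of stand-in data
wave = np.sin(2 * np.pi * 220 * t) * np.exp(-t)        # fake waveform
pitch = 120 + 40 * np.clip(np.sin(2 * np.pi * 0.7 * t), 0, None)  # fake f0

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, wave)
ax1.set_ylabel("waveform")
ax2.plot(t, pitch)
ax2.set_ylabel("stylized f0 (hz)")
ax2.set_xlabel("time (s)")
plt.savefig("pitch_display.png")   # images were floated as the fallback
```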
again it 's it 's more tcl - t so someone who 's familiar with tcl - tk has to do it , but , it would n't be hard to do . grad a: it does n't seem like having that real time is that necessary . so yo it seems to me you could do images . grad e: it would be to see it it would be like to see to hear it and see it , grad e: right , right . thought if you meant slides you meant like just like view graphs . professor b: , wh so . , no , we 're talking about on the computer and , when we were talking about this before we had littl this little demo meeting , we set up a range of different degrees of liveness that you could have and , the more live , the better , but , given the crunch of time , we may have to retreat from it to some extent . so for a lot of reasons , it would be very to have this transcriber interface be able to show some other interesting signal along with it so it 'd be a good thing to get in there . but , anyway , jus just looking for ways that we could actually show what you 're doing , in to people . cuz a lot of this , particularly for communicator , certainly a significant chunk of the things that we waved our arms about th originally had t had to do with prosodics it 'd be to show that we can actually get them and see them . grad a: and the last i item on the agenda is disk issues yet again . so , we 're doing ok on backed up . we 're only about thirty percent on the second disk . so , we have a little bit of time before that becomes critical , but we are like ninety five percent , ninety eight percent on the scratch disks for the expanded meetings . and , my original intention was like we would just delete them as we needed more space , but unfortunately we 're in the position where we have to deal with all the meeting data all at once , in a lot of different ways . grad a: , there 're a lot of transcribers , so all of those need to be expanded , and then people are doing chunking and i want to do , the permission forms , grad a: so i want those to be live , so there 's a lot of data that has to be around . and jane was gon na talk to , dave johnson about it . one of the things i was thinking is we just got these hundred alright , excuse me ten , sparc - blade sun - blades . grad a: they came in but they 're not set up yet . and so it seems to me we could hang scratch disk on those because they 'll be in the machine room , they 'll be on the fast connection to the rest of the machines . and if we just need un - backed - up space , we could just hang disks off them . grad a: , but the sun - blades have spare drive bays . just put them in . grad a: cuz the sun , these sun - blades take commodity hard drives . so you can just go out and buy a pc hard drive and stick it in . professor b: but if abbott is going to be our disk server it file server it seems like we would want to get it , a second disk rack . grad a: , there are lots of long term solutions . what i ' m looking for is where do we s expand the next meeting ? professor b: , for the next meeting you might be out of luck with those ten , might n't you ? , dave johnson is gone for , like , ten days , grad e: i ' m not doing anything on it right now until i get new meetings to transcri or that are new transcriptions coming in i really ca n't do anything . not that i ca n't do anything , i jus phd f: i jus gave thilo some about ten gigs , the last ten gigs of space that there was on abbott . and so but that but phd d: xg ? 
that 's also where we store the the hub - five training set waveforms , phd f: i do n't think that 's on xg . on xg is only carmen and du - and stephane 's disk . phd d: but i ' ve also been storing i ' ve been storing the feature files there and i s start deleting some because we now the best features are and we wo n't be using the old ones anymore . grad e: i have a lot of space and it 's not it 's n there 's very little not for long . grad a: , it 's probably probably only about four gig is on x on your x drive , professor b: there should i d there should just be a b i should have a button . professor b: just press press each meeting saying " we need more disk space " this week " . skip the rest of the conversation . phd c: it 's a little bit more as i usually do n't do not uncompress the all of the pzm and the pda things . phd f: is a little more ? right , so if you uncompressed everything it 's even more . phd f: about half ? so we 're definitely are storing , all of those . so there 's what thirty some gig of just meetings so far ? professor b: so - so so maybe there 's a hundred gig . or . cuz we have the uncompressed around also . so it 's like professor b: it 's the they really are cheap . it 's just a question of figuring out where they should be and hanging them , but but , we could , if you want to get four disks , get four disks . it 's small these things ar are just a few hundred dollars . phd f: i sent that message out to , i , you and dave asking for if we could get some disk . i s i sent this out a day ago phd f: but and dave did n't respond so i how the whole process works . does he just go out and get them and if it 's ok , and so i was assuming he was gon na take over that . but he 's probably too busy given that he 's leaving . professor b: , you need a direct conversation with him . and just say an - e just ask him that , wha what should you do . and in my answer back was " are you just want one ? " so that what you want to do is plan ahead a little bit and figure " , here 's what we pi figure on doing for the next few months " . grad a: wa - a i they want . the sysadmins would prefer to have one external drive per machine . so they do n't want to stack up external drives . and then they want everything else in the machine room . so the question is where are you gon na hang them ? professor b: so this is a question that 's pretty hard to solve without talking to dave , phd f: part of the reason why dave ca n't get the new machines up is because he does n't have room in the machine room right now . phd d: one on - one thing to in to t to do when you need to conserve space is phd d: i bet there are still some old , like , nine gig disks , around and you can probably consolidate them onto larger disks and recover the space . professor b: no . dave knows all these things , . an - and so , he always has a lot of plans of things that he 's gon na do to make things better in many ways an and runs out of time . grad a: but i know that generally their first priority has been for backed up disk . and so what he 's been concentrating on is the back up system , rather than on new disk . which professor b: but this is a very specific question for me . , we can easily get one to four disks , you just go out and get four and we ' ve got the money for it , it 's no big deal . , but the question is where they go , and i do n't think we can solve that here , you just have to ask him . phd d: to the machine that collects the data . 
so then you could , at least temporarily , store there . grad a: , it 's just it 's not on the net , so it 's a little awkward grad a: it 's behind lots of fire walls that do n't allow any services through except s grad a: and also on the list is to get it into the normal icsi net , but who knows when that will happen ? grad a: no , the problem with that is that they do n't currently have a wire running to that back room that goes anywhere near one of the icsi routers . professor b: , e again , any one of these things is certainly not a big deal . if there was a person dedicated to doing it they would happen pretty easily but it 's jus every ever everybody has a has professor b: all of us have long lists of different things we 're doing . but at any rate that there 's a longer term thing and there 's immediate need and we need a conversation with , maybe after tea you and go down and talk to him about it just say " wha , what should we do right now ? " professor b: let 's see . the only oth thing other thing i was gon na add was that , i talked briefly to mari and we had both been busy with other things so we have n't really connected that much since the last meeting we had here but we that we would have a telephone meeting the friday after next . and i wanted to make it , after the next one of these meetings , so something that we wanna do next meeting is to put together , a reasonable list for ourselves of what is it , that we ' ve done . just bulletize o e do i can dream up text but this is gon na lead to the annual report . so if w phd d: is this got ta be in the morning ? or because i fridays i have to leave like around two . so if it could be before that would be professor b: no , no but i do n't need other folks for the meeting . do it . a all i ' m saying is that on professor b: so what i meant was on the me this meeting if i wa something i ' m making a major thing in the agenda is i wanna help in getting together a list of what it is that we ' ve done so tell her . professor b: i have a pretty good idea , and then the next day , late in the day i 'll be having that discussion with her . so . phd d: one thing we in past meetings we had also a various variously talked about the work that w was happening on the recognition side but is n't necessarily related to meetings specifically . so . and i wondered whether we should maybe have a separate meeting and between , whoever 's interested in that because i feel that there 's plenty of to talk about but it would be maybe the wrong place to do it in this meeting if , it 's that it 's just gon na be ver very boring for people who are not , really interested in the details of the recognition system . professor b: , ok , so how many people here would not be interested in a meeting about recognition ? phd d: i know , jane an you mean in a separate meeting or ha talking about it in this phd d: it 's when the talk is about data collection , sometimes i ' ve , i ' m bored . phd d: so it 's i c sympathize with them not wanting to i to be if i cou this could professor b: it 's cuz y you have a so you need a better developed feminine side . professor b: there 's probably gon na be a lot of " bleeps " in this meeting . professor b: it must be nearing the end of the week . i , i ' ve heard some comments about like this . that m could be . the . u phd d: we could do it every other week or so . , whatev or whenever we feel like we phd d: we could do that , . i personally i 'd i ' m not in favor of more meetings . because , . 
phd f: but i do i do n't a lot of times lately it seems like we do n't really have enough for a full meeting on meeting recorder . professor b: and then if we find , we 're just not getting enough done , there 's all these topics not coming up , then we can expand into another meeting . but that 's a great idea . so . let 's chat about it with liz and jane when we get a chance , see what they think and phd f: that would be good . andreas and i have various talks in the halls and there 's lots of things , details and that would people 'd be interested in and i 'd , where do we go from here things and so , it would be good . professor b: , and you 're attending the front - end meeting as the others so you have probably one of the best you and i , i are the main ones who see the bridge between the two . phd d: so . so so we could talk a little bit about that now if there 's some time . phd d: i jus so the latest result was that yot i tested the final version of the plp configuration on development test data for this year 's hub - five test set . and the recognition performance was exactly , and exactly up to the , the first decimal , same as with the mel cepstra front - end . phd d: i overall . they were the males were slightly better and the females were slightly worse but nothing really . definitely not significant . and then the really thing was that if we combine the two systems we get a one and a half percent improvement . phd d: t with n - best rover , which is like our new and improved version of rover . phd d: which u actually uses the whole n - best list from both systems to mmm , c combine that . professor b: so except the only key difference between the two really is the smoothing at the end which is the auto - regressive versus the cepstral truncation . phd d: and so after i told the my colleagues at sri about that , now they definitely want to , , have a next time we have an evaluation they want to do , a at least the system combination . , and , why not ? phd d: w what do you mean ? more features in the sense of front - end features or in the sense of just bells and whistles ? grad a: no , front - end features . we did plp and mel cepstra . let 's , try rasta and msg , and phd d: so , we cou that 's the there 's one thing you do n't want to overdo it because y every front - end , if you multiply your effort by n , where n is a number of different systems and . so one compromise would be to only to have the everything up to the point where you generate lattices be one system and then after that you rescore your lattices with the multiple systems and combine the results and that 's a fairly painless thing . phd d: so . maybe a little less because at that point the error rates are lower and so if , maybe it 's only one percent but that would still be worthwhile doing . jus - , just wanted to let that 's working out very nicely . and then we had some results on digits , with we so this was really just to get dave going with his experiments . and so , . but as a result , , we were wondering why is the hub - five system doing so on the digits . and the reason is there 's a whole bunch of read speech data in the hub - five training set . phd d: and you c and not all of no it 's actually , digits is only a maybe a fifth of it . phd d: the rest is read timit data and atis data and wall street journal and like that . professor b: , so that 's actually not that different from the amount of training that there was . 
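for orientation, a heavily simplified sketch of rover-style voting. real rover first aligns the hypotheses into a word transition network by dynamic programming, and the n-best rover mentioned above votes over whole n-best lists with their scores, so this shows only the core idea:

```python
from collections import Counter

def rover_vote(aligned_hyps):
    # assumes the hypotheses are already aligned word-by-word; None marks a
    # deletion in one system's output
    combined = []
    for slot in zip(*aligned_hyps):
        word, _ = Counter(slot).most_common(1)[0]
        if word is not None:
            combined.append(word)
    return combined

hyp_plp = ["the", "cat", None,  "on", "the", "mat"]
hyp_mfc = ["the", "cat", "sat", "on", "the", "mat"]
hyp_msg = ["a",   "cat", "sat", "on", "the", "mat"]
print(rover_vote([hyp_plp, hyp_mfc, hyp_msg]))
# -> ['the', 'cat', 'sat', 'on', 'the', 'mat']
```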
phd d: but it definitely helps to have the other read data in there because we 're doing phd d: the error rate is half of what you do if you train only on ti timit not timit ti - digits , which is only what two hours something ? phd d: so . , more read speech data definitely helps . and you can leave out all the conversational data with no performance penalty . professor b: because because , it was apparent if you put in a bunch more data it would be better , phd d: so we only for the hub - five training , we 're only using a fairly small subset of the macrophone database . , so , you could beef that up and probably do even better . grad a: i could also put in focus condition zero from hub - four from broadcast news , which is mostly prepared speech . it 's not exactly read speech but it 's pretty darn close . phd d: , that 's plenty of read speech data . , wall street journal , take one example . phd d: but . so , that might be useful for the people who train the digit recognizers to use something other than ti - digits . professor b: they been using timit . that . they they experimented for a while with a bunch of different databases with french and spanish and cuz they 're multilingual tests and , and actually the best results they got wa were using timit . but which so that 's what they 're using now . but but certainly if we , if we knew what the structure of what we 're doing there was . there 's still a bunch of messing around with different kinds of noise robustness algorithms . so we exactly which combination we 're gon na be going with . once we know , then the trainable parts of it 'd be great to run lots of through . phd d: , that was that . and then i th chuck and i had some discussions about how to proceed with the tandem system and you wanna see where that stands ? phd f: , i ' m , so andreas brought over the alignments that the sri system uses . and so i ' m in the process of converting those alignments into label files that we can use to train a new net with . and so then i 'll train the net . and . phd d: an - and one side effect of that would be that it 's that the phone set would change . so the mlp would be trained on only forty - six or forty - eight phd d: forty - eight phones ? which is smaller than the phone set that we ' ve been using so far . and that will probably help , actually , phd d: because the fewer dimensions e the less trouble probably with the as far as just the , just we want to try things like deltas on the tandem features . and so you h have to multiply everything by two or three . and so , fewer dimensions in the phone set would be actually helpful just from a logistics point of view . professor b: although we , it 's not that many fewer and we take a klt anyway so we could phd d: exactly . so so that was the other thing . and then we wanted to s just limit it to maybe something on the same order of dimensions as we use in a standard front - end . so that would mean just doing the top i ten or twelve of the klt dimensions . professor b: , and we sh again check we should check with stephane . my impression was that when we did that before that had very little he did n't lose very much . phd d: but then and then something once we have the new m l p trained up , one thing i wanted to try just for the fun of it was to actually run like a standard hybrid system that is based on , those features and retrain mlp and also the , the dictionary that we use for the hub - five system . 
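a sketch of the tandem post-processing being described: log the mlp posteriors, decorrelate with a klt, and keep only the top components so the dimensionality is comparable to a standard front-end. the posteriors below are random stand-in data:

```python
import numpy as np

def klt_reduce(posteriors, n_keep=12):
    # log domain first, then a klt (eigenvectors of the covariance), keeping
    # the top n_keep components as the tandem feature vector
    x = np.log(posteriors + 1e-10)
    x = x - x.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_keep]
    return x @ eigvecs[:, order]

posts = np.random.dirichlet(np.ones(48), size=1000)  # 1000 frames, 48 phones
print(klt_reduce(posts).shape)                       # -> (1000, 12)
```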
professor b: and the b and the base u starting off with the base of the alignments that you got from i from a pretty decent system . phd d: so that would give us a , more hopefully a better system because , compared to what eric did a while ago , where he trained up , a system based on broadcast news and then tra retraining it on switchboard or s and but he d he did n't he probably did n't use all the training data that was available . and his dictionary probably was n't as tuned to conversational speech as the as ours is . phd d: and the dictionary made a huge difference . we we made some improvements to the dictionary 's to the dictionary about two years ago which resulted in a something like a four percent absolute error rate reduction on switchboard , which professor b: the other thing is , dipping deep into history and into our resource management days , when we were collaborating with sri before , it was , it is was a really key starting point for us that we actually got our alignment . when we were working together we got our initial alignments from decipher , at the time . and . later we got away from it because once we had decent systems going then it was typically better to use our own systems cuz they were self consistent but certainly to start off when we were trying to recover from our initial hundred and forty percent error rate . but that was a good way to start . and we 're not quite that bad with our switchboard systems but it was they certainly are n't as good as sri 's , so phd f: w what is the performance on s the best switchboard system that we ' ve done ? roughly ? professor b: , the hybrid system we never got better than about fifty percent error . and it was there 's just a whole lot of things that no one ever had time for . we never did really fix up the dictionary . we always had a list of a half dozen things that we were gon na do and a lot of them were pretty simple and we never did . , we never did an never did any adaptation phd d: and that number was on switchboard - one data , where the error rate now is in the twenties . so , . phd d: that 's yet s so it would be good t to r re just at least to give us an idea of how the hybrid system would do . professor b: but again it 's it 's the conver it 's the s conversational speech bit . because our broadcast news system is actually pretty good . he knows . phd d: and the other thing that would help us to evaluate is to see how the m l p is trained up . right ? because it 's a pretty good indicator of that . so it 's a sanity check of the m l p outputs before we go ahead and train up the , use them as a basis for the tandem system . professor b: it 'll still probably be worse . , it 's it 'd be context independent and so on . phd d: , we were n't whether it 's worth to just use the alignments from the s r i recognizer or whether to actually go through one or more iterations of embedded training where you realign . grad a: try it . you run it ? keep keep both versions ? see which one 's better ? professor b: , . i agree with ad you would then you proceed with the embedded training . it 's gon na take you a while to train at this net anyway . and while it 's training you may as test the one you have and see how it did . phd d: but i but in your experience have you seen big improvements in s on some tasks with embedded training ? or was it small - ish improvements that you got professor b: . it depended on the task . 
in this one i would expect it to be important because we 're coming from , alignments that were achieved with an extremely different system . grad a: although , we ' ve done it with when we were combining with the cambridge recurrent neural net , embedded training made it worse . which i ' ve never figured out . phd d: so you started training with outputs from a with alignments that were generated by the cambridge system ? and then . , that might probably just that was probably because your initial system your system was ba worse than cambridge 's . and you . phd d: no it 's weird that it did i ' m . it 's w it 's weird that it got worse . professor b: . no . tha - u we ' ve see and wi with the numbers ogi numbers task we ' ve seen a number of times people doing embedded trainings and things not getting better . phd d: actually it 's not that weird because we have seen we have seen cases where acoustic retraining the acoustic models after some other change made matters worse rather than better . professor b: it just but i would suspect that something that had a very different feature set , they were using pretty diff similar feature sets to us . i would expect that something that had a different feature set would benefit from professor b: , a minute , and the other thing , it was the other thing is that what was in common to the cambridge system and our system is they both were training posteriors . so , that 's another pretty big difference phd d: you mean with soft targets ? or ? , i ' m sor i missed what 's the key issue here ? professor b: , that both the cambridge system and our system were training posteriors . and if we 're coming from alignments coming from the sri system , it 's a likelihood - based system . so so that 's another difference . , there 's diffe different front - end different , training criterion , i would think that in a that an embedded training would have at least a good shot of improving it some more . but we . you gon na say something ? phd d: , that has about i you 'd would be gender - dependent training , so it 's that 's about mmm , something like thirty hours . professor b: at least a couple thousand hidden units . it 's it 's th the thing i 'll think about it a little more but it 'd be toss up between two thousand and four thousand . you definitely would n't want the eight thousand . it 's m it 's more than professor b: let me think about it , but that th at some point there 's diminishing returns . it does n't actually get worse , typically , but it but there is diminishing returns and you 're doubling the amount of time . phd d: remember you 'll have a smaller output layer so there 's gon na be fewer parameters there . grad a: right , because you used the context windows and so the input to hidden is much , much larger . professor b: , so it 's it 'd be way , way less than ten percent of the difference . there 's how bi how big what am i trying to think of ? phd f: the net that we did use already was eight thousand hidden units and that 's the one that eric trained up . professor b: so that would be like trained on s sixty or seventy hours . so , definitely not the one thousand two thousand fr the four thousand will be better and the two thousand will be almost will be faster and almost as good . professor b: , thirty hours is like a hundred and ten thousand seconds . , so that 's like eleven million frames . and a two thousand hidden unit net is i about seven , eight hundred thousand parameters . so that 's probably that 's probably fine . 
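the frame and parameter counts quoted here check out on the back of an envelope; the input layer size below (nine frames of context by 39 features) is an assumed typical hybrid configuration, not a number from the meeting:

```python
hours = 30
frames = hours * 3600 * 100   # at 100 frames/sec: 10.8 million frames

n_in, n_hidden, n_out = 9 * 39, 2000, 48
params = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
print(frames, params)         # 10800000 frames, 800048 weights -- roughly the
                              # "eleven million" and "seven, eight hundred
                              # thousand" quoted above

# "uncle bernie's rule" of about ten training samples per parameter:
print(frames / params)        # ~13.5, so a 2000-hidden-unit net is comfortable
```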
a four thousand is within the range that you could benefit from but the two thousand 'd be faster professor b: alright . uncle bernie 's rule is ten to one . bernie woodrow 's rule of uncle bernie professor b: nah . since we have nothing to talk about we only talked for an hour . professor b: de - ba - de . de - ba - de that 's all folks !
this is a relatively short meeting of the meeting recorder group , with only a few agenda items. transcription was discussed briefly because jane was not present ; however , this appears to be progressing well in parallel with ibm. web pages have been set up to show transcription status and to allow participants to approve transcripts. darpa demos are progressing well , with the back-end indexed to allow front-end filtering , and a potential demo idea investigated which would use x waves. transcriber is now working for windows ; however , live pitch contours may not work in the time available. backed-up disk space is now fine ; however , temporary space is running out fast. interim measures are discussed while sysadmin are away. the final version of the plp front-end has been tested and matches the mel cepstra front-end ( males slightly better , females slightly worse ) , and combining the two systems offers a 1.5% improvement. digit performance also improved thanks to training using read speech data. progress has also been made in sri alignment for the tandem system. the group note that the annual report needs to be worked on for next week , and it is also suggested to hold recognition meetings separately ; however , these issues will be discussed in more detail at the next meeting. jane will be contacted to see whether transcribers could work on a small amount of smartkom data , in order to get the project moving. details of the transcription status web page will be sent to the group. participants will be contacted to approve transcripts or suggest edits ; however , this depends upon decisions regarding disk space and (un)compression of audio files. there are disk space problems regarding scratch space , although the group decide that a solution about adding extra drives cannot be resolved without talking to sysadmin. until they can be contacted , the group will use the hard drives on each other's machines. displaying pitch contours live in transcriber would be desirable for the demo ; however , if the group run out of time , the group note that they could generate this statically using powerpoint. the group decide that it may be more beneficial to have separate meetings to discuss recognition , since jane and liz are less likely to be interested in this. they will discuss this with them before going ahead. scratch disk space used to store the uncompressed meetings when they are being processed is getting short - the group are using 98%. various options are suggested about adding extra drives , externally to their machines , or to the server , but these cannot be resolved until after the relevant sysadmin person returns from a break in 10 days time. time is getting tight for some of the demo development , with a lack of tcl-tk skills slowing back-end progress. additionally , time constraints may impinge upon the "live-ness" of the transcriber pitch contour demo. the annual report is due next week , so this will need to be discussed at the next meeting. the group are still waiting for ibm to return transcripts next week. this is not holding the group up , since parallel transcription is being used , and new transcribers have been recruited. the web page detailing transcription status is working and being updated when possible. backed-up disk space has been seen as a priority by sysadmin , and the group are now only using 30% of this. darpa demos are progressing , with indexing added to the back-end , which will allow the front end to do filtering.
transcriber has been developed to run on a windows machine , with further development being undertaken to add a pitch contour display. there is also the potential for a demo to be developed from x waves. the final version of plp has been tested and shows similar performance to the system with the mel cepstra front-end ( although performance for males is slightly , but non-significantly , better ). combining the two systems gives a 1.5% improvement in performance. there has also been some improvement with digits , by training the system on read speech data , rather than training solely on ti digits or timit data. progress is being made with sri system alignment , with label files converted to train the net.
###dialogue: grad a: ok , we 're on . so , this is gon na be a pretty short meeting because i have four agenda items , three of them were requested by jane who is not gon na be at the meeting today . so . the first was transcription status . does anyone besides jane the transcription status is ? phd f: first of all with ibm i got a note from brian yesterday saying that they finally made the tape for the thing that we sent them a week or week and a half ago phd f: and that it 's gone out to the transcribers and hopefully next week we 'll have the transcription back from that . phd f: jane seems to be moving right along on the transcriptions from the icsi side . she 's assigned , probably five or six m more meetings . phd f: and we ' ve run out of e d us because a certain number of them are , awaiting to go to ibm . phd d: so does she have transcribers right now who are sitting idle because there 's no data back from ibm phd d: because i need to ask jane whether it 's it would be ok for her , s some of her people to transcribe some of the initial data we got from the smartkom data collection , which is these short like five or seven minute sessions . phd d: and we want it , we need the again , we have a similar logistic set - up where we are supposed to send the data to munich phd d: and get it transcribed and get it back . but to get going we would like some of the data transcribed right away so we can get started . phd d: and so i wanted to ask jane if , maybe one of their transcribers could do since these are very short , that should really be , phd c: there 's only two channels . so it 's only as the synthesis does n't have to be transcribed . phd d: so it 's one channel to transcribe . and it 's one session is only like seven professor b: so that should have ma many fewer and it 's also not a bunch of interruptions with people and all that , phd d: right . and some of it is read speech , so we could give them the thing that they 're reading phd d: and so , i since she 's i was gon na ask her but since she 's not around i maybe i 'll phd d: if that 's ok with you to , get that to ask her for that , then i 'll do that . professor b: , if we 're held up on this other a little bit in order to encompass that , that 's ok because i , i still have high hopes that the ibm pipeline 'll work out for us , so it 's phd f: , and also related to the transcription , so i ' ve been trying to keep a web page up to date f showing what the current status is of the trans of all the things we ' ve collected and what stage each meeting is in , in terms of whether it 's phd f: - , i will . i that 's the thing that i sent out just to foo people saying can you update these pages phd f: and so that 's where i ' m putting it but i 'll send it out to the list telling people to look at it . grad a: , i have n't done that . so . i have lots of to add that 's just in my own directory . i 'll try to get to that . ok . so jane also wanted to talk about participant approval , but i do n't really think there 's much to talk about . i ' m just gon na do it . and , if anyone objects too much then they can do it instead . grad a: i ' m gon na send out to the participants , with links to web pages which contain the transcripts and allow them to suggest edits . and then bleep them out . phd f: so , the audio that they 're gon na have access to , will that be the uncompressed version ? or will you have scripts that like uncompress the various pieces and grad a: , that 's a good point . that 's a good point . 
, it 's probably going to have to be the uncompressed versions because , it takes too long to do random access decompression . phd f: , i was just wondering because we 're running out of the un - backed - up disk space on grad a: so , but that is a good point so we 'll get to that , too . , darpa demo status , not much to say . the back - end is working out fine . it 's more or less ready to go . i ' ve added some that indes indexes by the meeting type mr , edu , et cetera and also by the user id . so that the front - end can then do filtering based on that as . the front - end is , going more slowly as i s i said before just cuz i ' m not much of a tcl - tk programmer . and dave gelbart says he 's a little too busy . so don and i are gon na work on that and you and just talk about it off - line more . grad a: but the back - end was pretty smooth . so , we 'll have something . it may not be as as pretty as we might like , but we 'll have something . professor b: i wondered whe when we would reach dave 's saturation point . he 's been volunteering for everything and grad a: , he actually he volunteered but then he s then he retracted it . so . grad e: and , also , i was just showing andreas , i got an x waves display , and i how much more we can do with it with like the prosodic where we have like stylized pitches and signals and the transcripts on the bottom grad e: so , right now it 's just an x waves and then you have three windows but i , it looked pretty and i ' m it think it has potential for a little something , professor b: ok , so again , the issue is for july , the issue 's gon na be what can we fit into a windows machine , and so on , but phd c: i ' ve been putting together transcriber things for windows so i and i installed it on dave gelbart 's pc and it worked just fine . so hopefully that will work . phd d: really ? so is that because there 's some people it would be if we could get that to work at sri because the phd c: but but the problem is the version transcriber works with , the snack version , is one point six whatever and that 's not anymore supported . it 's not on the web page anymore . but wrote an email to the author of to the snack author and he sent me to one point six whatever library phd c: and so it works . , but then you ca n't add our patches and then the new version is different a and in , in terms of the source code . you you ca n't find the tcl files anymore . it 's some whatever wrapped thing and you ca n't access that so you have to install first install tcl then install snack and then install the transcriber thing and then do the patches . phd d: i wonder if we should contribute our changes back to the authors so that they maintain those changes along phd c: no , i have n't done that yet . i ' m nope . but i definitely will do that . professor b: so , can some of the that don 's talking about somehow fit into this , mean you just have a set of numbers that are associated with the grad e: , it 's ascii files or binary files , whatever representation . just three different it 's a waveform and just a stylized pitch vector so it 's grad e: we could do it in matl - you could do it in a number of different places i ' m . phd d: but it would be if the transcriber interface had like another window for the , maybe above the waveform where it would show some arbitrary valued function that is time synchron ti time synchronous with the waveform . grad a: it 'd be easy enough to add that .
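the extra window being asked for , some arbitrary valued function drawn time - synchronously above the waveform , is easy to mock up outside of transcriber while the tcl - tk work is pending . a minimal sketch , assuming the stylized pitch arrives as an ascii file of ( time , value ) pairs as described above ; the file names are invented :

# minimal sketch of the proposed display: stylized pitch on top,
# time-aligned waveform underneath.  file names and formats are
# assumptions (ascii "time value" pairs; a mono wav).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

sr, audio = wavfile.read("meeting_chan1.wav")     # hypothetical file
t_audio = np.arange(len(audio)) / sr
pitch = np.loadtxt("stylized_pitch.txt")          # columns: time, hz

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.step(pitch[:, 0], pitch[:, 1], where="post")  # stylized contour
top.set_ylabel("f0 (hz)")
bottom.plot(t_audio, audio, linewidth=0.3)
bottom.set_ylabel("waveform")
bottom.set_xlabel("time (s)")
plt.show()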
again it 's it 's more tcl - t so someone who 's familiar with tcl - tk has to do it , but , it would n't be hard to do . grad a: it does n't seem like having that real time is that necessary . so yo it seems to me you could do images . grad e: it would be to see it it would be like to see to hear it and see it , grad e: right , right . thought if you meant slides you meant like just like view graphs . professor b: , wh so . , no , we 're talking about on the computer and , when we were talking about this before we had littl this little demo meeting , we set up a range of different degrees of liveness that you could have and , the more live , the better , but , given the crunch of time , we may have to retreat from it to some extent . so for a lot of reasons , it would be very to have this transcriber interface be able to show some other interesting signal along with it so it 'd be a good thing to get in there . but , anyway , jus just looking for ways that we could actually show what you 're doing , in to people . cuz a lot of this , particularly for communicator , certainly a significant chunk of the things that we waved our arms about th originally had t had to do with prosodics it 'd be to show that we can actually get them and see them . grad a: and the last i item on the agenda is disk issues yet again . so , we 're doing ok on backed up . we 're only about thirty percent on the second disk . so , we have a little bit of time before that becomes critical , but we are like ninety five percent , ninety eight percent on the scratch disks for the expanded meetings . and , my original intention was like we would just delete them as we needed more space , but unfortunately we 're in the position where we have to deal with all the meeting data all at once , in a lot of different ways . grad a: , there 're a lot of transcribers , so all of those need to be expanded , and then people are doing chunking and i want to do , the permission forms , grad a: so i want those to be live , so there 's a lot of data that has to be around . and jane was gon na talk to , dave johnson about it . one of the things i was thinking is we just got these hundred alright , excuse me ten , sparc - blade sun - blades . grad a: they came in but they 're not set up yet . and so it seems to me we could hang scratch disk on those because they 'll be in the machine room , they 'll be on the fast connection to the rest of the machines . and if we just need un - backed - up space , we could just hang disks off them . grad a: , but the sun - blades have spare drive bays . just put them in . grad a: cuz the sun , these sun - blades take commodity hard drives . so you can just go out and buy a pc hard drive and stick it in . professor b: but if abbott is going to be our disk server it file server it seems like we would want to get it , a second disk rack . grad a: , there are lots of long term solutions . what i ' m looking for is where do we s expand the next meeting ? professor b: , for the next meeting you might be out of luck with those ten , might n't you ? , dave johnson is gone for , like , ten days , grad e: i ' m not doing anything on it right now until i get new meetings to transcri or that are new transcriptions coming in i really ca n't do anything . not that i ca n't do anything , i jus phd f: i jus gave thilo some about ten gigs , the last ten gigs of space that there was on abbott . and so but that but phd d: xg ? 
that 's also where we store the the hub - five training set waveforms , phd f: i do n't think that 's on xg . on xg is only carmen and du - and stephane 's disk . phd d: but i ' ve also been storing i ' ve been storing the feature files there and i s start deleting some because we now the best features are and we wo n't be using the old ones anymore . grad e: i have a lot of space and it 's not it 's n there 's very little not for long . grad a: , it 's probably probably only about four gig is on x on your x drive , professor b: there should i d there should just be a b i should have a button . professor b: just press press each meeting saying " we need more disk space " this week " . skip the rest of the conversation . phd c: it 's a little bit more as i usually do n't do not uncompress the all of the pzm and the pda things . phd f: is a little more ? right , so if you uncompressed everything it 's even more . phd f: about half ? so we 're definitely are storing , all of those . so there 's what thirty some gig of just meetings so far ? professor b: so - so so maybe there 's a hundred gig . or . cuz we have the uncompressed around also . so it 's like professor b: it 's the they really are cheap . it 's just a question of figuring out where they should be and hanging them , but but , we could , if you want to get four disks , get four disks . it 's small these things ar are just a few hundred dollars . phd f: i sent that message out to , i , you and dave asking for if we could get some disk . i s i sent this out a day ago phd f: but and dave did n't respond so i how the whole process works . does he just go out and get them and if it 's ok , and so i was assuming he was gon na take over that . but he 's probably too busy given that he 's leaving . professor b: , you need a direct conversation with him . and just say an - e just ask him that , wha what should you do . and in my answer back was " are you just want one ? " so that what you want to do is plan ahead a little bit and figure " , here 's what we pi figure on doing for the next few months " . grad a: wa - a i they want . the sysadmins would prefer to have one external drive per machine . so they do n't want to stack up external drives . and then they want everything else in the machine room . so the question is where are you gon na hang them ? professor b: so this is a question that 's pretty hard to solve without talking to dave , phd f: part of the reason why dave ca n't get the new machines up is because he does n't have room in the machine room right now . phd d: one on - one thing to in to t to do when you need to conserve space is phd d: i bet there are still some old , like , nine gig disks , around and you can probably consolidate them onto larger disks and recover the space . professor b: no . dave knows all these things , . an - and so , he always has a lot of plans of things that he 's gon na do to make things better in many ways an and runs out of time . grad a: but i know that generally their first priority has been for backed up disk . and so what he 's been concentrating on is the back up system , rather than on new disk . which professor b: but this is a very specific question for me . , we can easily get one to four disks , you just go out and get four and we ' ve got the money for it , it 's no big deal . , but the question is where they go , and i do n't think we can solve that here , you just have to ask him . phd d: to the machine that collects the data . 
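the sizes being traded back and forth here , " about half " , " thirty some gig " , " maybe a hundred gig " , follow from simple arithmetic . a sketch with assumed parameters ; the hours , channel count and compression ratio are guesses for illustration , not measurements :

# back-of-the-envelope disk arithmetic for the meeting audio.
# every number below is an assumption chosen for illustration.
hours = 40            # meetings collected so far (guess)
channels = 16         # head-mounted plus tabletop channels (guess)
sr, bytes_per_sample = 16000, 2
ratio = 0.5           # compression roughly halves the size, per above

raw_gb = hours * 3600 * channels * sr * bytes_per_sample / 1e9
print(f"uncompressed: {raw_gb:.0f} GB, compressed: {raw_gb * ratio:.0f} GB, "
      f"keeping both: {raw_gb * (1 + ratio):.0f} GB")

with these guesses the compressed set lands in the thirties of gigabytes and the compressed - plus - uncompressed total near a hundred , consistent with the figures quoted in the meeting .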
so then you could , at least temporarily , store there . grad a: , it 's just it 's not on the net , so it 's a little awkward grad a: it 's behind lots of fire walls that do n't allow any services through except s grad a: and also on the list is to get it into the normal icsi net , but who knows when that will happen ? grad a: no , the problem with that is that they do n't currently have a wire running to that back room that goes anywhere near one of the icsi routers . professor b: , e again , any one of these things is certainly not a big deal . if there was a person dedicated to doing it they would happen pretty easily but it 's jus every ever everybody has a has professor b: all of us have long lists of different things we 're doing . but at any rate that there 's a longer term thing and there 's immediate need and we need a conversation with , maybe after tea you and go down and talk to him about it just say " wha , what should we do right now ? " professor b: let 's see . the only oth thing other thing i was gon na add was that , i talked briefly to mari and we had both been busy with other things so we have n't really connected that much since the last meeting we had here but we that we would have a telephone meeting the friday after next . and i wanted to make it , after the next one of these meetings , so something that we wanna do next meeting is to put together , a reasonable list for ourselves of what is it , that we ' ve done . just bulletize o e do i can dream up text but this is gon na lead to the annual report . so if w phd d: is this got ta be in the morning ? or because i fridays i have to leave like around two . so if it could be before that would be professor b: no , no but i do n't need other folks for the meeting . do it . a all i ' m saying is that on professor b: so what i meant was on the me this meeting if i wa something i ' m making a major thing in the agenda is i wanna help in getting together a list of what it is that we ' ve done so tell her . professor b: i have a pretty good idea , and then the next day , late in the day i 'll be having that discussion with her . so . phd d: one thing we in past meetings we had also a various variously talked about the work that w was happening on the recognition side but is n't necessarily related to meetings specifically . so . and i wondered whether we should maybe have a separate meeting and between , whoever 's interested in that because i feel that there 's plenty of to talk about but it would be maybe the wrong place to do it in this meeting if , it 's that it 's just gon na be ver very boring for people who are not , really interested in the details of the recognition system . professor b: , ok , so how many people here would not be interested in a meeting about recognition ? phd d: i know , jane an you mean in a separate meeting or ha talking about it in this phd d: it 's when the talk is about data collection , sometimes i ' ve , i ' m bored . phd d: so it 's i c sympathize with them not wanting to i to be if i cou this could professor b: it 's cuz y you have a so you need a better developed feminine side . professor b: there 's probably gon na be a lot of " bleeps " in this meeting . professor b: it must be nearing the end of the week . i , i ' ve heard some comments about like this . that m could be . the . u phd d: we could do it every other week or so . , whatev or whenever we feel like we phd d: we could do that , . i personally i 'd i ' m not in favor of more meetings . because , . 
phd f: but i do i do n't a lot of times lately it seems like we do n't really have enough for a full meeting on meeting recorder . professor b: and then if we find , we 're just not getting enough done , there 's all these topics not coming up , then we can expand into another meeting . but that 's a great idea . so . let 's chat about it with liz and jane when we get a chance , see what they think and phd f: that would be good . andreas and i have various talks in the halls and there 's lots of things , details and that would people 'd be interested in and i 'd , where do we go from here things and so , it would be good . professor b: , and you 're attending the front - end meeting as the others so you have probably one of the best you and i , i are the main ones who see the bridge between the two . phd d: so . so so we could talk a little bit about that now if there 's some time . phd d: i jus so the latest result was that yot i tested the final version of the plp configuration on development test data for this year 's hub - five test set . and the recognition performance was exactly , and exactly up to the , the first decimal , same as with the mel cepstra front - end . phd d: i overall . they were the males were slightly better and the females were slightly worse but nothing really . definitely not significant . and then the really thing was that if we combine the two systems we get a one and a half percent improvement . phd d: t with n - best rover , which is like our new and improved version of rover . phd d: which u actually uses the whole n - best list from both systems to mmm , c combine that . professor b: so except the only key difference between the two really is the smoothing at the end which is the auto - regressive versus the cepstral truncation . phd d: and so after i told the my colleagues at sri about that , now they definitely want to , , have a next time we have an evaluation they want to do , a at least the system combination . , and , why not ? phd d: w what do you mean ? more features in the sense of front - end features or in the sense of just bells and whistles ? grad a: no , front - end features . we did plp and mel cepstra . let 's , try rasta and msg , and phd d: so , we cou that 's the there 's one thing you do n't want to overdo it because y every front - end , if you multiply your effort by n , where n is a number of different systems and . so one compromise would be to only to have the everything up to the point where you generate lattices be one system and then after that you rescore your lattices with the multiple systems and combine the results and that 's a fairly painless thing . phd d: so . maybe a little less because at that point the error rates are lower and so if , maybe it 's only one percent but that would still be worthwhile doing . jus - , just wanted to let that 's working out very nicely . and then we had some results on digits , with we so this was really just to get dave going with his experiments . and so , . but as a result , , we were wondering why is the hub - five system doing so on the digits . and the reason is there 's a whole bunch of read speech data in the hub - five training set . phd d: and you c and not all of no it 's actually , digits is only a maybe a fifth of it . phd d: the rest is read timit data and atis data and wall street journal and like that . professor b: , so that 's actually not that different from the amount of training that there was . 
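the " n - best rover " mentioned a few turns back is described only as letting the whole n - best lists from both systems vote , so the snippet below is a schematic stand - in rather than the actual sri implementation : it pools posterior mass over hypotheses from two n - best lists and returns the best - supported string .

# schematic stand-in for n-best system combination.  this is NOT the
# real sri "n-best rover"; it only illustrates letting whole n-best
# lists, rather than single 1-best outputs, cast the votes.
from collections import defaultdict

def combine(nbest_a, nbest_b):
    # each argument is a list of (hypothesis, posterior) pairs
    votes = defaultdict(float)
    for hyp, post in nbest_a + nbest_b:
        votes[hyp] += post        # pool posterior mass across systems
    return max(votes, key=votes.get)

# toy usage: the two front-ends disagree on their 1-best,
# but the pooled lists settle it
print(combine([("the cat sat", 0.6), ("a cat sat", 0.4)],
              [("a cat sat", 0.7), ("the cat sat", 0.3)]))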
phd d: but it definitely helps to have the other read data in there because we 're doing phd d: the error rate is half of what you do if you train only on ti timit not timit ti - digits , which is only what two hours something ? phd d: so . , more read speech data definitely helps . and you can leave out all the conversational data with no performance penalty . professor b: because because , it was apparent if you put in a bunch more data it would be better , phd d: so we only for the hub - five training , we 're only using a fairly small subset of the macrophone database . , so , you could beef that up and probably do even better . grad a: i could also put in focus condition zero from hub - four from broadcast news , which is mostly prepared speech . it 's not exactly read speech but it 's pretty darn close . phd d: , that 's plenty of read speech data . , wall street journal , take one example . phd d: but . so , that might be useful for the people who train the digit recognizers to use something other than ti - digits . professor b: they been using timit . that . they they experimented for a while with a bunch of different databases with french and spanish and cuz they 're multilingual tests and , and actually the best results they got wa were using timit . but which so that 's what they 're using now . but but certainly if we , if we knew what the structure of what we 're doing there was . there 's still a bunch of messing around with different kinds of noise robustness algorithms . so we exactly which combination we 're gon na be going with . once we know , then the trainable parts of it 'd be great to run lots of through . phd d: , that was that . and then i th chuck and i had some discussions about how to proceed with the tandem system and you wanna see where that stands ? phd f: , i ' m , so andreas brought over the alignments that the sri system uses . and so i ' m in the process of converting those alignments into label files that we can use to train a new net with . and so then i 'll train the net . and . phd d: an - and one side effect of that would be that it 's that the phone set would change . so the mlp would be trained on only forty - six or forty - eight phd d: forty - eight phones ? which is smaller than the phone set that we ' ve been using so far . and that will probably help , actually , phd d: because the fewer dimensions e the less trouble probably with the as far as just the , just we want to try things like deltas on the tandem features . and so you h have to multiply everything by two or three . and so , fewer dimensions in the phone set would be actually helpful just from a logistics point of view . professor b: although we , it 's not that many fewer and we take a klt anyway so we could phd d: exactly . so so that was the other thing . and then we wanted to s just limit it to maybe something on the same order of dimensions as we use in a standard front - end . so that would mean just doing the top i ten or twelve of the klt dimensions . professor b: , and we sh again check we should check with stephane . my impression was that when we did that before that had very little he did n't lose very much . phd d: but then and then something once we have the new m l p trained up , one thing i wanted to try just for the fun of it was to actually run like a standard hybrid system that is based on , those features and retrain mlp and also the , the dictionary that we use for the hub - five system . 
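the conversion described above , sri alignments into label files for net training , amounts to expanding phone - level start and end times into one target per frame . a minimal sketch ; the input format below ( one " start end phone " line per phone ) is invented , since the real sri alignment format is not spelled out here :

# sketch: expand phone-level alignments into frame-level mlp targets.
# assumed input: one line per phone, "start_sec end_sec phone".
FRAME_S = 0.01                        # 10 ms frame step (assumed)
phone_ids = {}                        # e.g. the 48-phone set -> ids

def frame_labels(align_path):
    labels = []
    for line in open(align_path):
        start, end, phone = line.split()
        pid = phone_ids.setdefault(phone, len(phone_ids))
        n = round((float(end) - float(start)) / FRAME_S)
        labels.extend([pid] * n)      # one target per 10 ms frame
    return labels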
professor b: and the b and the base u starting off with the base of the alignments that you got from i from a pretty decent system . phd d: so that would give us a , more hopefully a better system because , compared to what eric did a while ago , where he trained up , a system based on broadcast news and then tra retraining it on switchboard or s and but he d he did n't he probably did n't use all the training data that was available . and his dictionary probably was n't as tuned to conversational speech as the as ours is . phd d: and the dictionary made a huge difference . we we made some improvements to the dictionary 's to the dictionary about two years ago which resulted in a something like a four percent absolute error rate reduction on switchboard , which professor b: the other thing is , dipping deep into history and into our resource management days , when we were collaborating with sri before , it was , it is was a really key starting point for us that we actually got our alignment . when we were working together we got our initial alignments from decipher , at the time . and . later we got away from it because once we had decent systems going then it was typically better to use our own systems cuz they were self consistent but certainly to start off when we were trying to recover from our initial hundred and forty percent error rate . but that was a good way to start . and we 're not quite that bad with our switchboard systems but it was they certainly are n't as good as sri 's , so phd f: w what is the performance on s the best switchboard system that we ' ve done ? roughly ? professor b: , the hybrid system we never got better than about fifty percent error . and it was there 's just a whole lot of things that no one ever had time for . we never did really fix up the dictionary . we always had a list of a half dozen things that we were gon na do and a lot of them were pretty simple and we never did . , we never did an never did any adaptation phd d: and that number was on switchboard - one data , where the error rate now is in the twenties . so , . phd d: that 's yet s so it would be good t to r re just at least to give us an idea of how the hybrid system would do . professor b: but again it 's it 's the conver it 's the s conversational speech bit . because our broadcast news system is actually pretty good . he knows . phd d: and the other thing that would help us to evaluate is to see how the m l p is trained up . right ? because it 's a pretty good indicator of that . so it 's a sanity check of the m l p outputs before we go ahead and train up the , use them as a basis for the tandem system . professor b: it 'll still probably be worse . , it 's it 'd be context independent and so on . phd d: , we were n't whether it 's worth to just use the alignments from the s r i recognizer or whether to actually go through one or more iterations of embedded training where you realign . grad a: try it . you run it ? keep keep both versions ? see which one 's better ? professor b: , . i agree with ad you would then you proceed with the embedded training . it 's gon na take you a while to train at this net anyway . and while it 's training you may as test the one you have and see how it did . phd d: but i but in your experience have you seen big improvements in s on some tasks with embedded training ? or was it small - ish improvements that you got professor b: . it depended on the task . 
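the embedded training being weighed here is just a loop : borrow the other system 's alignments for the first pass , train , realign with your own model , retrain . a self - contained toy version ; the two helpers are deliberately trivial stand - ins , not a real training or viterbi api :

# toy embedded-training loop.  train_mlp and realign are trivial
# stand-ins (class means / nearest mean), not a real trainer.
import numpy as np

def train_mlp(feats, labels):
    # stand-in for net training: one mean vector per class
    return {c: feats[labels == c].mean(0) for c in np.unique(labels)}

def realign(model, feats):
    # stand-in for viterbi realignment: nearest class mean per frame
    classes = list(model)
    dists = np.stack([np.linalg.norm(feats - model[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(0)]

def embedded_training(feats, borrowed_labels, iterations=2):
    labels = np.asarray(borrowed_labels)   # pass 0: the borrowed labels
    for _ in range(iterations):
        model = train_mlp(feats, labels)   # train on current labels
        labels = realign(model, feats)     # relabel with own model
    return model

as suggested in the exchange above , nothing stops you from evaluating the pass - zero model while the realigned one trains , and keeping whichever scores better .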
in this one i would expect it to be important because we 're coming from , alignments that were achieved with an extremely different system . grad a: although , we ' ve done it with when we were combining with the cambridge recurrent neural net , embedded training made it worse . which i ' ve never figured out . phd d: so you started training with outputs from a with alignments that were generated by the cambridge system ? and then . , that might probably just that was probably because your initial system your system was ba worse than cambridge 's . and you . phd d: no it 's weird that it did i ' m . it 's w it 's weird that it got worse . professor b: . no . tha - u we ' ve see and wi with the numbers ogi numbers task we ' ve seen a number of times people doing embedded trainings and things not getting better . phd d: actually it 's not that weird because we have seen we have seen cases where acoustic retraining the acoustic models after some other change made matters worse rather than better . professor b: it just but i would suspect that something that had a very different feature set , they were using pretty diff similar feature sets to us . i would expect that something that had a different feature set would benefit from professor b: , a minute , and the other thing , it was the other thing is that what was in common to the cambridge system and our system is they both were training posteriors . so , that 's another pretty big difference phd d: you mean with soft targets ? or ? , i ' m sor i missed what 's the key issue here ? professor b: , that both the cambridge system and our system were training posteriors . and if we 're coming from alignments coming from the sri system , it 's a likelihood - based system . so so that 's another difference . , there 's diffe different front - end different , training criterion , i would think that in a that an embedded training would have at least a good shot of improving it some more . but we . you gon na say something ? phd d: , that has about i you 'd would be gender - dependent training , so it 's that 's about mmm , something like thirty hours . professor b: at least a couple thousand hidden units . it 's it 's th the thing i 'll think about it a little more but it 'd be toss up between two thousand and four thousand . you definitely would n't want the eight thousand . it 's m it 's more than professor b: let me think about it , but that th at some point there 's diminishing returns . it does n't actually get worse , typically , but it but there is diminishing returns and you 're doubling the amount of time . phd d: remember you 'll have a smaller output layer so there 's gon na be fewer parameters there . grad a: right , because you used the context windows and so the input to hidden is much , much larger . professor b: , so it 's it 'd be way , way less than ten percent of the difference . there 's how bi how big what am i trying to think of ? phd f: the net that we did use already was eight thousand hidden units and that 's the one that eric trained up . professor b: so that would be like trained on s sixty or seventy hours . so , definitely not the one thousand two thousand fr the four thousand will be better and the two thousand will be almost will be faster and almost as good . professor b: , thirty hours is like a hundred and ten thousand seconds . , so that 's like eleven million frames . and a two thousand hidden unit net is i about seven , eight hundred thousand parameters . so that 's probably that 's probably fine . 
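the sizing arithmetic in that last turn , spelled out . the input - layer geometry ( nine frames of context , thirty - six features per frame ) is an assumption picked to land near the quoted figures ; only the thirty hours , the two thousand hidden units and the forty - eight outputs come from the discussion :

# the back-of-the-envelope above, made explicit.
hours = 30
frames = hours * 3600 * 100                  # 100 frames/sec -> 10.8M
inputs, hidden, outputs = 9 * 36, 2000, 48   # input geometry assumed
params = inputs * hidden + hidden * outputs  # weights, ignoring biases
print(frames, params, round(frames / params, 1))
# -> 10800000  744000  14.5

the ratio of training frames to weights comes out around fourteen to one , comfortably above the " ten to one " rule of thumb quoted just below .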
a four thousand is within the range that you could benefit from but the two thousand 'd be faster professor b: alright . uncle bernie 's rule is ten to one . bernie woodrow 's rule of uncle bernie professor b: nah . since we have nothing to talk about we only talked for an hour . professor b: de - ba - de . de - ba - de that 's all folks ! ###summary: this is a relatively short meeting of the meeting recorder group , with only a few agenda items . transcription was discussed briefly because jane was not present , however this appears to be progressing well in parallel with ibm . web pages have been set up to show transcription status and to allow participants to approve transcripts . darpa demos are progressing well with the back-end indexed to allow front-end filtering , and potential demo ideas investigated which would use x waves . transcriber is now working for windows , however live pitch contours may not work in the time available . backed-up disk space is now fine , however temporary space is running out fast . interim measures are discussed while sysadmin are away . improvement has been made in the final version of the plp , which shows slightly better male performance , and combined with mel cepstra offers a 1.5% improvement . digit performance also improved thanks to training using read speech data . progress has also been made in sri alignment for the tandem system . the group note that the annual report needs to be worked on for next week , and it is also suggested to hold recognition meetings separately , however these issues will be discussed in more detail at the next meeting . jane will be contacted to see whether transcribers could work on a small amount of smartkom data , in order to get the project moving . details of the transcription status web page will be sent to the group . participants will be contacted to approve transcripts or suggest edits , however this depends upon decisions regarding disk space and ( un )compression of audio files . there are disk space problems regarding scratch space , although the group decide that a solution involving adding extra drives cannot be resolved without talking to sysadmin . until they can be contacted , the group will use the hard drives on each other's machines . displaying pitch contours live in transcriber would be desirable for the demo , however , if the group run out of time , the group note that they could generate this statically using powerpoint . the group decide that it may be more beneficial to have separate meetings to discuss recognition , since jane and liz are less likely to be interested in this . they will discuss this with them before going ahead . scratch disk space used to store the uncompressed meetings when they are being processed is getting short - the group are using 98% . various options are suggested about adding extra drives , externally to their machines , or to the server , but these cannot be resolved until after the relevant sysadmin person returns from a break in 10 days time . time is getting tight for some of the demo development , especially a lack of tcl-tk skills slowing up front-end progress . additionally , time constraints may impinge upon the "live-ness" of the transcriber pitch contour demo . annual report is due next week , so this will need to be discussed at the next meeting . the group are still waiting for ibm to return transcripts next week . this is not holding the group up , since parallel transcription is being used , and new transcribers have been recruited .
the web page detailing transcription status is working and being updated when possible . backed-up disk space has been seen as a priority by sysadmin , and the group are now only using 30% of this . darpa demos are progressing , with indexing added to the back-end , which will allow the front-end to do filtering . transcriber has been developed to run on a windows machine , with further development being undertaken to add a pitch contour display . there is also the potential for a demo to be developed from x waves . the final version of plp has been tested and shows similar performance to the system with the mel cepstra front-end ( although performance for males is slightly , but non-significantly , better ) . combining the two systems gives a 1.5% improvement in performance . there has also been some improvement with digits , by training the system on read speech data , rather than training solely on ti-digits or timit data . progress is being made with sri system alignment , with label files converted to train the net .
###dialogue:
grad g: hi . we ' ve met before , like , i remember talking to you about aspect like that at some point or other . grad c: so . for those who everyone knows me , this is great . , apart from that , the old gang , johno and bhaskara have been with us from day one grad c: and they 're engaged in various activities , some of which you will hear about today . ami is our counselor and spiritual guidance and also interested in problems concerning reference of the more complex type , and he sits in as a interested participant and helper . is that a good characterization ? grad c: and hopefully it is by means of keith that we will be able to get a b a better formal and a better semantic idea of what a construction is and how we can make it work for us . additionally his interest surpasses english because it also entails german , an extra capability of speaking and writing and understanding and reading that language . and , is there anyone who does n't know nancy ? do you do nancy ? grad g: i did n't know . i did n't mean to be humor copying , but ok , yes , i know myself . it 's ok . it 's a grad c: officially , but in reality already much longer and next to some more or less bureaucratic with the data collection she 's also the wizard in the data collection , grad c: ok , why do n't we get started on that subject anyways . , so we 're about to collect data and the s the following things have happened since we last met . when will we three meet again ? and grad c: what happened is that , " a " , there was some confusion between you and jerry with the that leading to your talking to catherine snow , and he was he completely that some something confusing happened . his idea was to get the l the lists of mayors of the department , the students . it it 's exactly how you interpreted it , s grad c: majors and just sending the little write - up that we did on to those email lists undergrad d: ok . , . but so it was really carol snow who was confused , not me and not jerry . undergrad d: that 's good . so i should still do that . and using the thing that you wrote up . grad c: wonderful . and we have a little description of asking peop subjects to contact fey for recruiting them for our thing and there was some confusion as to the consent form , which is that what you just signed grad g: did jerry talk to you about maybe using our class ? the students in the undergrad class that he 's teaching ? grad c: however there is always more people in a facul in a department than are just taking his class or anybody else 's class at the moment and one should reach out and try and get them all . grad g: ok , but th i it 's that people in his class cover a different set so than the c is the cogsci department that you were talking about ? undergrad d: that 's what i suggested to him , that people like jerry and george and et cetera just grad g: so if there 's something that you 're sending out you can also s send me a copy , me or bhaskara could either of us could post it to is it grad g: if it 's a general solicitation that is just contact you then we can pro post it to the news group so . grad c: how however i suggest that if you look at your email carefully you may think you may find that you already have it . grad c: anyhow , the , not only also we will talk about linguistics and computer science . 
and then , secondly , we had , you may remember , the problem with the re - phrasing , that subject always re - phrase the task that we gave them , grad c: and so we had a meeting on friday talking about how to avoid that , and it proved finally fruitful in the sense that we came up with a new scenario for how to get the subject m to really have intentions and to act upon those , there the idea is now that next actually we need to hire one more person to actually do that job because it 's getting more complicated . so if anyone interested in what i ' m about to describe , tell that person to write a mail to me or jerry soon , fast . the idea now is to come up with a high level of abstract tasks , " go shopping " , " take in a batch of art " , " visit " , " do some sightseeing " , blah - blah , analogous to what fey has started in compiling here and already she has already gone to the trouble of anchoring it with specific o entities and real world places you will find in heidelberg . and . so out of these f s these high level categories the subject can pick a couple , such as if there is a cop a category in emptying your roll of film , the person can then decide " ok , i wanna do that at this place " , make up their own itinerary a and tasks and the person is not allowed to take this h high level category list with them , but the person is able to take notes on a map that we will give him and the map will be a tourist 's schematic representation with symbols for the objects . and so , the person can maybe make a mental note that " i wanted to go shopping here " and " i wanted to maybe take a picture of that " and " maybe eat here " and then goes in and solves the task with the system , ie fey , and we 're gon na try out that any questions ? grad g: so y you 'll have those say somewhere what their intention was so you still have the thing about having data where what the actual intention was ? grad g: but they will there 's nothing that says " these are the things you want to do " so they 'll say " these are the things i want to do " and right , so they 'll have a little bit more natural interaction ? grad f: so they 'll be given this map , which means that they wo n't have to like ask the system for in for like high level information about where things are ? grad c: it 's a schematic tourist map . so it 'll be i it 'll still require the that information and an grad g: it w it does n't have like streets on it that would allow them to figure out their way grad e: so you 're just saying like what part of town the things are in or whatever ? grad c: a the map is more a means for them to have the buildings and their names and maybe some ma major streets and their names and we want to maybe ask them , if you have get it isolated street the , whatever , river street , and they know that they have decided that , yes , that 's where they want to do this action that they have it with them and they can actually read them or have the label for the object because it 's too hard to memorize all these st strange german names . and then we 're going to have another we 're gon na have w another trial run ie the first with that new setup tomorrow at two and we have a real interesting subject which is ron for who those who know him , he 's the founder of icsi . so he 'll he 's around seven seventy years old , .
grad c: and he also approached me and he offered to help our project and he was more thinking about some high level thinking tasks and i said " we need help you can come in as a subject " and he said " ok " . so that 's what 's gon na happen , tomorrow , data . grad c: new new set up . which i 'll hopefully scrape together t but , to fey , we already have a blueprint and work with that . questions ? comments on that ? if not , we can move on . no ? no more questions ? grad g: like so so it 's just based on like the materials you had about heidelberg . grad c: talk to a machine and it breaks down and then the human comes on . the question is just how do we get the tasks in their head that they have an intention of doing something and have a need to ask the system for something without giving them a clear wording or phrasing of the task . grad c: because what will happen then is that people repeat , or as much as they can , of that phrasing . grad g: , are you worried about being able to identify the goals that we ' ve d you guys have been talking about are this these identifying which of three modes their question concerns . so it 's like the enter versus view grad c: , we will get a protocol of the prior interaction , right ? that 's where the instructor , the person we are going to hire , and the subjects sit down together with these high level things and so th the q first question for the subject is , " so these are things , we thought a tourist can do . is there anything that interests you ? " and the person can say " , sh this is something i would do . i would go shopping " . and then we can this s instructor can say " , then you may want to find out how to get over here because this is where the shopping district is " . grad g: so the interaction beforehand will give them hints about how specific or how whatever though the kinds of questions that are going to ask during the actual session ? grad c: no . just ok , what would you like to buy and then ok there you wanna buy a whatever cuckoos clocks ok and the there is a store there . grad c: so the task then for that person is t finding out how to get there , that 's what 's left . and we know that the intention is to enter because we know that the person wants to buy a cuckoos clock . grad g: ok , that 's what so like those tasks are all gon na be unambiguous about which of the three modes . right . ok . phd a: , so the idea is to try to get the actual phrasing that they might use and try to interfere as little as possible with their choice of words . grad c: yes . in a sense that 's exactly the idea , which is never possible in a s in a lab situation , phd a: , u the one experiment th that i ' ve read somewhere , it was they u used pictures . grad c: we had exactly that on our list of possible way things so we i even made a silly thing how that could work , how you control you are here you want to know how to get someplace , and this is the place and it 's a museum and you want to do some and there 's a person looking at pictures . so , this is exactly getting someplace with the intention of entering and looking at pictures . grad c: however , not only was the common census were among all participants of friday 's meeting was it 's gon na be very laborious to make these drawings for each different things , all the different actions , if possible , and also people will get caught up in the pictures . so all of a sudden we 'll get descriptions of pictures in there . 
and people talking about pictures and pictorial representations i would s i would still be willing to try it . phd a: , i ' m not saying it 's necessary but i you might be able to combine text and some picture and also it will be a good idea to show them the text and chew the task and then take the test away the text away so that they are not guided by what you wrote , grad c: they will have no more linguistic matter in front of them when they enter this room . then i suggest we move on to the to we have the edu project , let me make one more general remark , has two side actions , its action items that we 're do dealing with , one is modifying the smartkom parser and the other one is modifying the smartkom natural language generation module . and this is not too complicated but i ' m just mentioning it put it in the framework because this is something we will talk about now . , i have some news from the generation , do you have news from the parser ? grad f: yes , i would really p it would be better if i talked about it on friday . if that 's ok . grad c: wonderful . , did you run into problems or did you run into not h having time ? grad c: and i do have some good news for the natural language generation however . and the good news is i it 's done . , meaning that tilman becker , who does the german one , actually took out some time and already did it in english for us . and so the version he 's sending us is already producing the english that 's needed to get by in version one point one . grad f: so i take it that was similar to the what we did for the parsing ? grad c: i it even though the generator is a little bit more complex and it would have been , not changing one hundred words but maybe four hundred words , but it would have been but this is i good news , and the time and especially bhaskara and do i have it here ? the time is now fixed . it 's the last week of april until the fourth of may so it 's twenty - sixth through fourth . that they 'll be here . so it 's extremely important that the two of you are also present in this town during that time . grad g: w it does n't really have much meaning to grad students but final projects might . grad b: i 'll be here working on something . guaranteed , it 's just will i be here , in i 'll be here too actually but grad c: and ask them more questions and sit down together and write sensible code and they can give some talks and . but grad b: but it 's not like we need to be with them twenty - four hours a day s for the seven days that they 're here . grad c: not unless you really want to . and they 're both guys so you may want to . ok , that much from the parser and generator side , unless there are more questions on that . grad c: and i was completely flabbergasted here and i and that 's also it 's going to produce the concept - to - speech blah - blah information for necessary for one point one in english based on the english , in english . i was like " ok , grad g: so that was like one of the first l , the first task was getting it working for english . so that 's over now . is that right ? so the basic requirement fulfilled . grad c: , the basic requirement is fulfilled almost . when andreas stolcke and his gang , when they have changed the language model of the recognizer and the dictionary , then we can actually a put it all together grad c: and then when if something actually happens and some answers come out , then we 're done . grad g: are they is it using the database ? the german tv movie . 
so all the actual data might be german names ? grad c: the ok , so you how the german dialogue the german the demo dialogue actually works . it works the first thing is what 's , showing on tv , and then the person is presented with what 's running on tv in germany on that day , on that evening grad c: and so you take one look at it and then you say " that 's really nothing there 's nothing for me there " what 's running in the cinemas ? so maybe there 's something better happening there . and then you get you 're shown what movies play which films , and it 's gon na be all the heidelberg movies and what films they are actually showing . and most of them are going to be hollywood movies . so , " american beauty " is " american beauty " , grad g: it 's a so would the generator , like the english language sentence of it is " these are the follow the following films are being shown " like that ? grad c: , but it in that sense it does n't make in that case it does n't really make sense to read them out loud . grad c: but it 'll tell you that this is what 's showing in heidelberg and there you go . grad c: and the presentation agent will go " hhh ! " nuh ? like that the avatar . and then you pick a movie and it show shows you the times and you pick a time and you pick seats and all of this . pretty straightforward . but it 's so this time we are at an advantage because it was a problem for the german system to incorporate all these english movie titles . nuh ? but in english , that 's not really a problem , unless we get some topical german movies that have just come out and that are in their database . so the person may select " huehner rennen " or whatever . grad c: this is very rough but this is what johno and i managed to come up with . the idea here is that grad b: this is the s the schema of the xml here , not an example like that . grad c: this is not an xml this is towards an a schema , definition . the idea is , so , imagine we have a library of schema such as the source - path - goal and then we have forced motion , we have cost action , we have a whole library of schemas . and they 're gon na be , fleshed out in their real ugly detail , source - path - goal , and there 's gon na be s a lot of on the goal and blah - blah , that a goal can be and . what we think is and all the names could should be taken " cum grano salis " . this is a the fact that we 're calling this " action schema " right now should not entail that we are going to continue calling this " action schema " . but what that means is we have here first of all on the in the first iteration a stupid list of source - path - goal actions grad b: actions that can be categorized with or that are related to source - path - goal . grad b: and then those actions can be in multiple categories at the same time if necessary . grad c: exactly . also , these things may or may not get their own structure in the future . so this is something that , may also be a res as a result of your work in the future , we may find out that , there 're really s these subtle differences between even within the domain of entering in the light of a source - path - goal schema , that we need to put in fill in additional structure up there . but it gives us a handle . so with this we can s slaughter the cow any anyway we want . 
it it is it was a it gave us some headache , how do we avoid writing down that we have the enter source - path - goal that this but this gets the job done in that respect and maybe it is even conceptually somewhat adequate in a sense that we 're talking about two different things . we 're talking more on the intention level , up there , and more on the this is the your basic bone schema , down there . grad b: one question , robert . when you point at the screen is it your shadow that i ' m supposed to look at ? grad b: , what this is that there 's an interface between what we are doing and the action planner grad b: and right now the way the interface is " action go " and then they have the what the person claimed was the source and the person claimed as the goal passed on . and the problem is , is that the current system does not distinguish between goes of type " going into " , goes of type " want to go to a place where take a picture of " , et cetera . grad c: so this is what it looks like now , some simple " go " action from it from an object named " peter 's kirche " of the type " church " to an object named " powder - tower " of the type " tower " . grad g: and is that and tha that 's changeable ? or not ? like are we adapting to it ? or grad c: the input into the action planning , as it is now . and what we are going to do , we going to and you can see here , for johno focus the shadow , we 're gon here you have the action and the domain object and w and on grad c: between here and here , so as you can see this is on one level and we are going to add another " struct " , if you want , ie a rich action description on that level . so in the future grad c: in the future though , the content of a hypothesis will not only be an object and an action and a domain object but an action , a domain object , and a rich action description , grad f: so you had like an action schema and a source - path - goal schema , right ? so how does this source - path - goal schema fit into the action schema ? like is it one of the tags there ? grad b: so the source - path - goal schema in this case , i ' ve if i understand how we described we set this up , cuz we ' ve been arguing about it all week , but we 'll hold the in this case it will hold the features i . i ' m not it 's hard for me to exactly s so that will store the object that is w the source will store the object that we 're going from , the goal will store the f grad b: we 'll fill those in fill those roles in , right ? the s action - schemas have extra see we so those are schemas exist because in case we need extra information instead of just making it an attribute and which is just one thing we decided to make it 's own entity so that we could explode it out later on in case there is some structure that we need to exploit . grad g: ok , so th do n't kn um this is just xml mo notational but the fact that it 's action schema and then slash action schema that 's a whole entit grad c: source is just not spelled out here . source meaning source will be will have a name , a type , maybe a dimensionality , grad c: s source it will be , we 'll f we know a lot about sources so we 'll put all of that in source . but it 's independent whether we are using the spg schema in an enter , view , or approach mode , this is just properties of the spg schema . we can talk about paths being the fastest , the quickest , the nicest and , or and the trajector should be coming in there as . and then g the same about goals . 
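one way to make the two levels concrete : the flat " go " hypothesis the action planner gets today , next to the proposed rich action description with the action schema and the spg schema side by side . the element and attribute names below are guesses at the structure being described , not the real m - three - l schema :

# a guess at the shape under discussion: the current flat "go"
# hypothesis extended with a rich action description.  element and
# attribute names are illustrative, not the real m-three-l schema.
import xml.etree.ElementTree as ET

hypothesis = ET.fromstring("""
<hypothesis>
  <action>go</action>
  <domainObject name="peters-kirche" type="church"/>
  <domainObject name="powder-tower" type="tower"/>
  <richActionDescription>
    <actionSchema><spgAction>enter</spgAction></actionSchema>
    <spgSchema>
      <source name="peters-kirche" type="church"/>
      <path/>
      <goal name="powder-tower" type="tower"/>
    </spgSchema>
  </richActionDescription>
</hypothesis>
""")
print(hypothesis.find(".//spgAction").text)   # -> enter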
grad g: so i the question is when you actually fill one of these out , it 'll be under action schema ? those are it 's gon na be one y you 'll pick one of those for ok these are this is just a layout of the possible that could go play that role . grad b: and then what actual a action is chosen is will be in the action schema section . grad g: ok , so one question . this was in this case it 's all clear , obvious , but you can think of the enter , view and approach as each having their roles , right ? the it 's implicit that the person that 's moving is doing entering viewing and approaching , but the usual thing is we have bindings between they 're like action specific roles and the more general source - path - goal specific roles . so are we worrying about that or not for now ? grad c: yes , yes . since you bring it up now , we will worry about it . tell us more about it . what do you what do you grad g: what 's that ? i it i may be just reading this and interpreting it into my head in the way that i ' ve always viewed things and that may or may not be what you guys intended . but if it is , then the top block is like , you have to list exactly what x - schema or in this action schema , there 'll be a certain one , that has its own s structure and maybe it has about that specific to entering or viewing or approaching , but those could include roles like the thing that you 're viewing , the thing that you 're entering , the thing that you 're grad g: whatever , that which are think of enter , view and approach as frames and they have frame - specific parameters and roles and you can also describe them in a general way as source - path - goal schema and maybe there 's other image schemas that you could add after this that , how do they work in terms of a force dynamics grad g: or how do they work in f terms of other things . so all of those have f either specific frame specific roles or more general frame specific roles that might have binding . so the question is are how to represent when things are linked in a certain way . so we know for enter that there 's container potentially involved and it 's not i if you wanna have in the same level as the action schema spg schema it 's somewhere in there that you need to represent that there is some container and the interior of it corresponds to some part of the source - path - goal goal i in this case . so is there an easy way in this notation to show when there 's identity between things and i di if that 's something we need to invent or just grad b: i if this answers your question , i was just staring at this while you were talking , a link between the action schema , a field in the s in the schema for the image schemas that would link us to which action schema we were supposed to use so we could grad c: , that 's one thing is that we can link up , think also that we can have one or m as many as we want links from the schema up to the s action description of it . but the notion i got from nancy 's idea was that we may f find concepts floating around i in the a action description of the action f " enter " frame up there that are , e when you talk about the real world , actually identical to the goal of the s source - path - goal schema , grad c: and do we have means of telling it within that a and the answer is . 
the way we have those means that are even part of the m - three - l a api , grad c: this referencing thing however is of temporary nature because sooner or later the w - three - c will be finished with their x - path , , specification and then it 's going to be even much nicer . then we have real means of pointing at an individual instantiation of one of our elements here and link it to another one , and this not only within a document but also via documents , and all in a v very easy e homogenous framework . grad g: so happen to know how what " sooner or later " means like in practice ? grad c: so it 's g it 's the spec is there and it 's gon na part of the m - three - l ap api filed by the end of this year so that this means we can start using it now . but this is a technical detail . grad b: references from the roles in the schema the bottom schemas to the action schemas is wha i ' m assuming . grad c: personally , i ' m looking even more forward to the day when we 're going to have x forms , which l is a form of notation where it allows you to say that if the spg action up there is enter , then the goal type can never be a statue . grad g: so you have constraints that are dependent on the c actual s specific filler , of some attribute . grad c: - , . w e exactly . , this , does not make sense in light of the statue of liberty , however it is these things are imaginable . grad f: like forced motion and caused action and like you have for spg ? and if so like can are you able to enforce that if it 's spg action then you have that schema , if it 's a forced motion then you have the other schema present in the grad c: we have absolute we have no means of enforcing that , so it would be considered valid if we have an spg action " enter " and no spg schema , but a forced action schema . could happen . grad g: whi - which is not bad , because , that there 's multiple sens that particular case , there 's mult there 's a forced side of that verb as . grad c: it maybe it means we had nothing to say about the source - path - goal . what 's also , and for a i for me in my mind it 's crucially necessary , is that we can have multiple schemas and multiple action schemas in parallel . and we started thinking about going through our bakery questions , so when i say " is there a bakery here ? " i do ultimately want our module to be able to first of all f tell the rest of the system " hey this person actually wants to go there " and " b " , that person actually wants to buy something to eat there . and if these are two different schemas , ie the source - path - goal schema of getting there and then the buying snacks schema , grad g: under so o under action schema there 's a list that can include both things . grad c: ye , they would both schemas would appear , so what is the is there a " buying s snacks " schema ? grad c: so so we would instantiate the spg schema with a source - path - goal blah - blah grad c: and the buying event at which however that looks like , the place f thing to buy . grad g: interesting . would you say that the like you could have a flat structure and just say these are two independent things , but there 's also this like causal , so one is really facilitating the other and it 's part of a compound action of some kind , which has structure . 
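the identity machinery being described , saying that the goal of the spg schema and the place of the buying event are one and the same entity , reduces to id / reference attributes until the x - path specification arrives . a toy illustration ; the attribute names are invented :

# toy illustration of cross-schema identity via id/reference
# attributes ("id" and "ref" are invented names): the spg goal and
# the commercial action's place resolve to the same object.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<richActionDescription>
  <spgSchema><goal id="obj1" name="bakery"/></spgSchema>
  <commercialAction><place ref="obj1"/></commercialAction>
</richActionDescription>
""")
by_id = {e.get("id"): e for e in doc.iter() if e.get("id")}
place = doc.find(".//place")
print(by_id[place.get("ref")].get("name"))    # -> bakery, same entity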
grad c: what 's also , for me , crucially necessary is that we can have multiple schemas and multiple action schemas in parallel . we started thinking about going through our bakery questions : when i say " is there a bakery here ? " , i do ultimately want our module to be able to tell the rest of the system , " a " , this person actually wants to go there , and " b " , that person actually wants to buy something to eat there . and those are two different schemas , ie the source - path - goal schema of getting there and then the buying - snacks schema ,
grad g: so under action schema there 's a list that can include both things .
grad c: yes , both schemas would appear . so , is there a " buying snacks " schema ?
grad c: we would instantiate the spg schema with a source - path - goal blah - blah
grad c: and the buying event , however that looks , with the place and the thing to buy .
grad g: interesting . would you say you could have a flat structure and just say these are two independent things ? but there 's also this causal link , so one is really facilitating the other , and it 's part of a compound action of some kind , which has structure .
grad c: now it 's technically possible to fit schema within schema , and schemata within schemata
grad g: there are truly times when you have two independent goals that someone might express at once , but in this case one is really a means for achieving some other purpose .
grad c: so if i ' m the recipient of such a message , and i get a source - path - goal where the goal is a bakery , and then i get a commercial action which takes place in a bakery , and they are , via identifiers , identified to be the same thing
grad g: because they 're two different things , one of which you could think of as a precondition for the second .
grad g: so , ok , there are levels of granularity . there 's a single event of which they are both a part , and independently they are events which have very different characters as far as source - path - goal and so on . and when you identify the source - path - goal and whatever , there 's gon na be a desire , eating , hunger , whatever other frames you have involved , and they have to match up in certain ways . so it seems like each of them has its own internal structure and a mapping to these schemas from the other . but that 's just me .
grad c: and as i prefaced , this is the result of one week of arguing about it
grad c: we should have added an xml example , or some xml examples
grad c: and this is on my list of things for next week . there 's also the question of recursiveness and hierarchy in there . do we want the schemas just flat ? if we can actually get it so that , out of one utterance , we can activate more than one schema , then we 're already pretty good ,
phd a: you have to be careful with that , because many actions presuppose almost infinitely many other actions . if you go to a bakery you have a general intention of not being hungry .
phd a: you have a specific intention to cross at the traffic light to get there . you have a further specific intention to lift your right foot , and so on . so you really have to decide on the level of abstraction that you aim at , zero in on that , and more or less ignore the rest , unless there are implications that you want to draw from sub - tasks that are relevant but very difficult .
grad g: the other thing i thought of is that you could want to go to the bakery because you 're supposed to meet your friend there , so being able to infer the second thing is very useful and probably often right .
grad g: maybe their friend said they were going to meet them in a bakery around the area . i ' m inventing contexts which are maybe unlikely ,
grad g: but it 's still the case that you could override that default by giving extra information , which is to me a reason why you would keep that inference separate from the knowledge of " ok , they really want to know if there 's a bakery around here " , which is direct .
grad c: there should never be a hard - coded shortcut from the bakery question to the double schema thing . and when i have traveled with my friends we have made exactly these kinds of appointments .
grad g: i met someone at the bakery in the victoria station train station in london before ,
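coming back to the bakery example : a hedged sketch of one message carrying two parallel schemas tied together by identifiers . " buyingEvent " and the other names are invented for illustration :

    <richActionDescription>
      <spgSchema>
        <goal id="b1" type="bakery"/>  <!-- "is there a bakery here?": the person wants to go there -->
      </spgSchema>
      <buyingEvent>
        <place idref="b1"/>            <!-- and wants to buy something at that same bakery -->
        <goods type="snack"/>
      </buyingEvent>
    </richActionDescription>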
phd a: so the enter - view - approach , the eva , those are fixed slots in this particular action ? every action of this kind will have a choice ? or will it just change ?
grad e: every spg action is either an enter or a view or an approach ,
phd a: so for each particular action that you may want to characterize , you would have some number of slots that define in some way what this action is all about : it can be either a , b or c . so is it a fixed number , or do you leave it open ? could it be between one and fifteen , is it flexible ?
grad c: it depends . if you actually write down the schema , then you have to say whether it 's exactly one of them , or can be none , or can be any of them . however , it seems sensible to me to view them as mutually exclusive , or maybe not .
phd a: no , no , actually my question is simpler than that . ok , so you have an spg action , and it has three different aspects , because you can either enter a building or view it or approach it and touch it . now you define another action , call it spg - one
phd a: a different action . and this action - two would have various possibilities of interpreting what you would like to do . similar to enter - view - approach , you may want to send a letter , read a letter , or dictate a letter , let 's say .
grad b: ok , i do n't know if i ' m gon na answer your question with this , but these are categories inside of action schemas . so spg action is a category . what we 're specifying here is that this is a category that the actions " enter , view and approach " fall into , because they have a related source - path - goal schema in our tourist domain . cuz viewing , in a tourist domain , is going up to something , or actually going from one place to another to take a picture ,
phd a: so it 's automatically derived from the structure that is built elsewhere .
grad e: this is a category structure here : action schema . what are some types of action schemas ? one of the types is source - path - goal action . and what are some types of that ? an enter , a view , an approach . those are all source - path - goal actions .
grad b: inside of enter there will be roles that can be filled . so if i want to go from outside to inside , you 'd have the roles that need to be filled , a source - path - goal set of roles , where the source would be outside and the path is to the door , or whatever . and if you wanted to have a new type of action , you 'd create a new type of category . every action has a set of related schemas , like source - path - goal or force , whatever , so we would put " write a letter " in the categories in which it has schemas
grad b: and then later we 'd have a communication event action , which we 'd define down there as well
grad g: so there 's a bit of redundancy , in that you have categories at the top , under action schema , and the things that go under a particular category are supposed to have a corresponding schema definition for that type . so what 's the function of having it up there too ? i ' m wondering whether under action schema you could just say whatever it 's gon na be , enter , view or approach , or whatever number of things , partly because you need to know somewhere that those things fall into some categories . and it may be multiple categories , as you say , which is the reason why it gets a little messy . but if it 's supposed to be categorized in category x , then the corresponding schema x will be among the structures that follow .
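one way to picture the category layer being described , sketched as a dtd fragment ; all names here are hypothetical , and whether the alternatives are mutually exclusive is exactly what is being debated :

    <!-- the eva choice: every spg action is an enter, a view or an approach -->
    <!ELEMENT spgAction (enter | view | approach)>
    <!-- a second, hypothetical category for the letter example -->
    <!ELEMENT communicationAction (writeLetter | readLetter | dictateLetter)>
    <!-- the top block lists the categories; the star leaves the number open -->
    <!ELEMENT actionSchema (spgAction | communicationAction)*>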
grad c: this is probably the way that seemed more intuitive to johno
grad b: no , one reason we 're doing it this way is in case there 's extra structure in the enter action that 's not captured by the schemas ,
grad g: it 's easy to go back and forth , is n't it ? i agree . which is why i would think you would say enter and then just say all the things that are relevant specifically to enter , and then the things that are abstract will be in the abstract things as well . and that 's why the bindings become useful .
grad e: right , so you 're saying you could practically turn this structure inside out ?
grad c: get rid of the spg - slash - something , the sub - actions category , because what does that tell us ? and i agree that this is something we need to discuss ,
grad g: what you could say is , for enter , " here , list all the kinds of schemas
grad g: list all the parent categories " . it 's just like a frame hierarchy , like you have these blended frames . so you would say enter , and you 'd say my parent frames are such - and - such , and then those are the ones that you actually define , and you say how their roles bind to your specific roles , which will probably be richer and fuller and have other things in there .
grad g: and it could be not a coincidence . like i said , i ' m just hitting everything with a hammer that i developed ; i ' m just telling you what happens when you hit the button .
grad e: but there 's a good question here . like , when do you need damn this headset !
grad e: how do i come at this question ? who uses this data structure ? do you say " alright , i ' m going to do an spg action " , and then either the computer or the user says " alright , i know i want to do a source - path - goal action , so what are my choices among that ? ok , so do an enter - view - approach " ? it 's not like that . it 's more like you say " i want to do an enter . "
grad e: and then you 're more interested in knowing what the parent categories of that are , right ? so the representation that you were just talking about seems more relevant to the kinds of things you would have to do .
grad b: i ' m not sure if i understand your question . only one of those things is gon na be lit up when we pass this on . so only enter will be there , if our module decided that enter is the case ; view and approach will not be there .
grad c: it came into my mind that sometimes even two could be on , and that would be interesting . nevertheless
grad e: maybe i ' m not understanding where this comes from and where this goes to .
grad b: if that 's the case , i do n't think our system can handle it currently .
grad c: because this is exactly the discussion we need . period . no more qualifiers than that .
grad c: let 's make a sharper claim : we will not end this discussion anytime soon .
grad c: and it 's gon na get more and more complex , the larger our domains get .
grad c: and we will have all of our points in writing pretty soon . this is about being recorded , also .
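a hedged sketch of the inverted , inside - out organization being proposed , where enter lists its parent schemas and binds its roles to theirs ; again , all names are invented :

    <enter>
      <parentSchemas>
        <schema name="spg"/>        <!-- enter is a source-path-goal action -->
        <schema name="container"/>  <!-- and also involves a container -->
      </parentSchemas>
      <roleBindings>
        <!-- identity between the container interior and the spg goal -->
        <bind role="container.interior" to="spg.goal"/>
      </roleBindings>
    </enter>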
grad b: in terms of why it 's laid out like this versus some other way
grad b: that 's a contentious point between the two of us , but this is one way to link the way these roles are filled out to the action .
grad b: because if we know that enter is an spg action , we know to look for an spg schema and fill in the appropriate roles later on .
grad g: and you could have also indicated that by saying " enter , what kinds of action am i ? " so it 's just the reverse organization . are there reasons why one is better than the other that come from other sources ?
grad c: yes . this is a schema that defines xml messages that are passed from one module to another , mainly meaning from the natural language understanding , or from the deep language understanding , to the action planner . now , the reason for not using the reverse approach is that each module would always have to go back and look up which entity can have which parents , so you always need the whole body of your model to figure out what belongs to what . or you always send it along , so every message says " here i am , this person , and i have these parents " .
grad c: it may or may not be just a pain . i ' m completely willing to throw all of this away
grad c: and completely redo it , and after some iterations we may just do that .
grad e: i would just like to ask if this could happen for next time , cuz i ' m new and i do n't really know what to make of this and what this is for : could someone make an example of what would actually be in it ? like , first of all , what modules are talking to each other using this ,
grad c: i will promise for next time to have fleshed - out xml examples for a run - through , to see how this then translates ,
grad c: including the " miracle occurs here " part . is there more to be said ? in principle , whether or not we take the enter - view and throw it all up the ladder what does professor peter call that ? throwing somebody up the stairs ? has no one here read the peter principle ?
grad f: people reach the level at which they 're incompetent , or whatever .
grad c: ok , so we can promote enter - view and all up a bit and get rid of the blah - x - blah asterisk sub - action item altogether . no problem with that , and we will play around with all of them . but the principal distinction between having the pure schemas and their instantiations on the one hand , and adding some more intention - oriented specification in parallel to that on the other : this approach seems workable to me . if you all share that opinion , then that has made my day much happier .
grad c: i ' m never happy when he uses the word " roles " ,
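to make the message flow concrete , a hedged before - and - after sketch of a hypothesis , reusing the " go " example from earlier in the meeting ; the tag names are illustrative , not the actual m - three - l ones :

    <!-- current interface: a bare go action between two domain objects -->
    <hypothesis>
      <action type="go"/>
      <domainObject name="Peters-Kirche" type="church"/>  <!-- source -->
      <domainObject name="Powder-Tower" type="tower"/>    <!-- goal -->
    </hypothesis>

    <!-- proposed: the same hypothesis, plus a rich action description -->
    <hypothesis>
      <action type="go"/>
      <domainObject name="Powder-Tower" type="tower"/>
      <richActionDescription>
        <actionSchema>
          <spgAction type="enter"/>  <!-- go in order to enter, not just view or approach -->
        </actionSchema>
        <spgSchema>
          <source name="Peters-Kirche"/>
          <goal name="Powder-Tower"/>
        </spgSchema>
      </richActionDescription>
    </hypothesis>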
grad c: that 's all i have for today . no , there 's one more issue , bhaskara brought it up : meeting time rescheduling .
grad c: so , the monday at three o ' clock time has turned out to be not good anymore . people have been thinking about an alternative time , and the one we came up with is friday two - thirty ? three ? what was it ?
grad b: you have class until two , so if we do n't want him to have to run over here
grad c: two - thirty - ish , or three : friday around that time .
grad e: i could do that . earlier on friday is better ; if it were between three and three - thirty , i would take the three , but three is fine .
grad c: you are more than welcome : if you think this discussion gets you anywhere in your life , then you 're free to come
undergrad d: i ' m just glad that i do n't have to work it out myself , that i ' m not involved in the working out of it .
grad g: and we 'll get the summary , the short version .
phd a: and i would like to second keith 's request : it would help to have a detailed example .
grad g: we 'll have it in writing . or , better , speech .
grad b: the other good thing is that jerry can be here on friday , and he can weigh in as well .
grad c: and if you can get at that binding point , maybe with an example , that would be helpful for johno and me .
grad c: the binding is technically no problem , but it seems conceptually important to me that we find out : if there are things in there that are of a general nature , we should distill them out and put them where the schemas are , and if there are things that are intention - specific , then we should put them up somewhere else ,
grad g: it 's general across all of these things ; as shastri would say , binding is an essential cognitive process . so i do n't think it will be isolated to one or the other , but you can definitely figure out where things belong . and actually i would be curious to see how separate the intention part and the action part are in the system . i know the whole thing is an intention lattice , so right now the rad , or whatever , is one potential block inside intention : it 's still mainly an intention hypothesis , and the rad is just one way to describe the action part of it .
grad g: not just that you want to go from here to here : it 's that the action is what you intend , and this action consists of all these complicated modules and image schemas and whatever .
grad c: yes . and there will be a relatively high level of redundancy , in the sense that , ultimately ,
grad c: if we want to get really cocky , we will say " if you really look at it , you just need our rad . " you can throw the rest away , because you 're not gon na get any more information out of the action as you find it there in the domain object . then again , the domain object may contain information that we do n't really care about either . we 'll see how that evolves . if people really like our rad , what might happen is that they will get rid of that action thing completely and leave it up to us to get the parser input
grad g: mmm . we know the things that make use of this , so we can just change them so that they make use of the rad .
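a hedged guess at the packaging just described , with the rad as one block inside an intention hypothesis in the intention lattice ; none of this structure was shown in the meeting , so every name here is an assumption :

    <intentionLattice>
      <intentionHypothesis rank="1">
        <action type="go"/>  <!-- the old action part -->
        <richActionDescription>
          <!-- the block the rad would occupy, and might eventually subsume -->
          <actionSchema><spgAction type="enter"/></actionSchema>
        </richActionDescription>
      </intentionHypothesis>
    </intentionLattice>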
grad g: i ca n't believe we 're using this term . every time i say " rad ! " it 's horrible . but i see what you mean .
grad g: that does n't make it a great term , though . it 's just like those jokes where you have to work on both levels .
grad c: because it evokes rdf , the " resource description framework " , which is the biggest thing
grad c: and having the term " description " in there is wonderful . " rich " is also great .
grad g: and intentions will be " rid " ? so , the sample data that you guys showed some time ago , and maybe you 're gon na run a trial tomorrow : i ' m just wondering whether some of the actual sentences from this domain will be available . cuz it 'd be good for me to look at ; if i ' m thinking about examples , i ' m mostly looking at child language , which will have some overlap , but not total overlap , with the kinds of things that you guys are getting . you showed some here before , and maybe you ' ve posted some before , but where would i look if i want to see it ?
grad c: the transcript is just not available , because nobody has transcribed it yet . i 'll transcribe it , though .
grad g: ok , do n't make it a high priority . if you just tell me two examples , the representational problems will be there , enough for me to think about .
grad c: ok . so : friday , whoever wants to and can come . this friday .
the data collection script has been slightly modified , so that it encourages more natural dialogue between the subjects and the " wizard " . another trial run will take place , while a call to recruit subjects is being emailed to students . meanwhile , the translation of the tv and cinema information system into english is almost complete ; this was the basic requirement of the project .

on the other hand , there was a presentation of the model that offers more elaborate action planning for smartkom , of which the enter / view / approach ( eva ) modes are a part . these modes will form categories of complete xml schemas , with information filled in from the language understanding in a more elaborate way than the current object - " go action " - object model . these categories will , in turn , be linked with action schemas , one of which is source - path - goal ( spg ) . categories and action schemas can have , in theory , any number of blocks , depending on the expansion of the domain . the notation provides for linking and referencing between different schemas , and the model also allows for multiple action schemas to be triggered in parallel . however , the structure of the model is open for discussion , since its purpose was to elicit discussion and highlight issues .

as the data collection is about to start , a call for the recruitment of subjects is going to be sent out . the main pool of subjects is going to be the student community at the institute . along with the " wizard " , who is going to be an integral part of the experiments , another person needs to be hired as the instructor for the tasks involved in them . meetings were rescheduled and are now going to take place on fridays .

for the next meeting , there is going to be a presentation of the modifications in the parser module of the basic system . additionally , the proposed xml model will be put to the test with concrete data . such examples will also clarify issues relating to the binding and redundancy of features with common characteristics amongst the schemas ( eg " container " for enter and " goal " for spg ) .

subjects in the trial runs of the experiment were given detailed descriptions of the tasks , which led to the subsequent dialogue being a re - iteration or re - phrasing of the instructions . using pictures instead would be one way to deal with the problem ; however , it was deemed too laborious , and it would divert the focus of the experiment .

as the original action planner of the smartkom system only included a generic spg schema , a new module was presented that allows for variety in the user intentions to be captured . this being only a model , there are several issues that will need to be clarified in the future : how the model deals with redundancy of information among categories and action schemas , and whether a flat or a hierarchical model would be preferable , are two of them . what is also clear is that as the domain of research broadens beyond the study of eva modes , the complexity of the model will also increase .

another trial run of the data collection experiment is to take place , while subjects are being recruited . there have been some adjustments to the script : the prior description of the tasks given to the subjects is now going to be more schematic , although the intentions are still going to be clear . the lack of a detailed written explanation will hopefully encourage more natural and varied dialogue between the subjects and the " wizard " . on the other hand , the generator module of the system has been translated from german : eventually , a user is going to be able to request and receive tv - and cinema - related information in english , which will satisfy the basic project requirements .

the model of a new module for smartkom was presented . it is an interface between the language understanding and the action planning modules . one layer of xml schemas creates a richer representation of the linguistic analysis , which is subsequently used to trigger one or more action schemas . the model keeps the concept of xml messages being sent between the modules of the system , although it is open - ended as to the number of schemas involved .
grad g: so i the question is when you actually fill one of these out , it 'll be under action schema ? those are it 's gon na be one y you 'll pick one of those for ok these are this is just a layout of the possible that could go play that role . grad b: and then what actual a action is chosen is will be in the action schema section . grad g: ok , so one question . this was in this case it 's all clear , obvious , but you can think of the enter , view and approach as each having their roles , right ? the it 's implicit that the person that 's moving is doing entering viewing and approaching , but the usual thing is we have bindings between they 're like action specific roles and the more general source - path - goal specific roles . so are we worrying about that or not for now ? grad c: yes , yes . since you bring it up now , we will worry about it . tell us more about it . what do you what do you grad g: what 's that ? i it i may be just reading this and interpreting it into my head in the way that i ' ve always viewed things and that may or may not be what you guys intended . but if it is , then the top block is like , you have to list exactly what x - schema or in this action schema , there 'll be a certain one , that has its own s structure and maybe it has about that specific to entering or viewing or approaching , but those could include roles like the thing that you 're viewing , the thing that you 're entering , the thing that you 're grad g: whatever , that which are think of enter , view and approach as frames and they have frame - specific parameters and roles and you can also describe them in a general way as source - path - goal schema and maybe there 's other image schemas that you could add after this that , how do they work in terms of a force dynamics grad g: or how do they work in f terms of other things . so all of those have f either specific frame specific roles or more general frame specific roles that might have binding . so the question is are how to represent when things are linked in a certain way . so we know for enter that there 's container potentially involved and it 's not i if you wanna have in the same level as the action schema spg schema it 's somewhere in there that you need to represent that there is some container and the interior of it corresponds to some part of the source - path - goal goal i in this case . so is there an easy way in this notation to show when there 's identity between things and i di if that 's something we need to invent or just grad b: i if this answers your question , i was just staring at this while you were talking , a link between the action schema , a field in the s in the schema for the image schemas that would link us to which action schema we were supposed to use so we could grad c: , that 's one thing is that we can link up , think also that we can have one or m as many as we want links from the schema up to the s action description of it . but the notion i got from nancy 's idea was that we may f find concepts floating around i in the a action description of the action f " enter " frame up there that are , e when you talk about the real world , actually identical to the goal of the s source - path - goal schema , grad c: and do we have means of telling it within that a and the answer is . 
the way we have those means that are even part of the m - three - l a api , grad c: this referencing thing however is of temporary nature because sooner or later the w - three - c will be finished with their x - path , , specification and then it 's going to be even much nicer . then we have real means of pointing at an individual instantiation of one of our elements here and link it to another one , and this not only within a document but also via documents , and all in a v very easy e homogenous framework . grad g: so happen to know how what " sooner or later " means like in practice ? grad c: so it 's g it 's the spec is there and it 's gon na part of the m - three - l ap api filed by the end of this year so that this means we can start using it now . but this is a technical detail . grad b: references from the roles in the schema the bottom schemas to the action schemas is wha i ' m assuming . grad c: personally , i ' m looking even more forward to the day when we 're going to have x forms , which l is a form of notation where it allows you to say that if the spg action up there is enter , then the goal type can never be a statue . grad g: so you have constraints that are dependent on the c actual s specific filler , of some attribute . grad c: - , . w e exactly . , this , does not make sense in light of the statue of liberty , however it is these things are imaginable . grad f: like forced motion and caused action and like you have for spg ? and if so like can are you able to enforce that if it 's spg action then you have that schema , if it 's a forced motion then you have the other schema present in the grad c: we have absolute we have no means of enforcing that , so it would be considered valid if we have an spg action " enter " and no spg schema , but a forced action schema . could happen . grad g: whi - which is not bad , because , that there 's multiple sens that particular case , there 's mult there 's a forced side of that verb as . grad c: it maybe it means we had nothing to say about the source - path - goal . what 's also , and for a i for me in my mind it 's crucially necessary , is that we can have multiple schemas and multiple action schemas in parallel . and we started thinking about going through our bakery questions , so when i say " is there a bakery here ? " i do ultimately want our module to be able to first of all f tell the rest of the system " hey this person actually wants to go there " and " b " , that person actually wants to buy something to eat there . and if these are two different schemas , ie the source - path - goal schema of getting there and then the buying snacks schema , grad g: under so o under action schema there 's a list that can include both things . grad c: ye , they would both schemas would appear , so what is the is there a " buying s snacks " schema ? grad c: so so we would instantiate the spg schema with a source - path - goal blah - blah grad c: and the buying event at which however that looks like , the place f thing to buy . grad g: interesting . would you say that the like you could have a flat structure and just say these are two independent things , but there 's also this like causal , so one is really facilitating the other and it 's part of a compound action of some kind , which has structure . 
grad c: now it 's technically possible that you can fit schema within schema , and schema within schemata grad g: there are truly times when you have two independent goals that they might express at once , but in this case it 's really like there 's a purpo means that f for achieving some other purpose . grad c: , if i ' m recipient of such a message and i get a source - path - goal where the goal is a bakery and then i get a commercial action which takes place in a bakery , and they are , via identifiers , identified to be the same thing here . grad g: because they 're two different things one of which is l you could think of one a sub pru whatever pre - condition for the second . grad g: so . ok . so there 's like levels of granularity . so there 's a single event of which they are both a part . and they 're independently they are events which have very different characters as far as source - path - goal whatever . so when you identify source - path - goal and whatever , there 's gon na to be a desire , whatever , eating , hunger , whatever other frames you have involved , they have to match up in ways . so it seems like each of them has its own internal structure and mapping to these schemas from the other but that 's just that 's just me . grad c: and as i prefaced it this is the result of one week of arguing about it grad c: i should have we should have added an ano an xml example , or some xml examples grad c: and this is on a on my list of things until next week . it 's also a question of the recursiveness and a hier hierarchy in there . do we want the schemas just blump ? it 's if we can actually get it so that we can , out of one utterance , activate more than one schema , then we 're already pretty good , phd a: you have to be careful with that thing because many actions presuppose some almost infinitely many other actions . so if you go to a bakery you have a general intention of not being hungry . phd a: you have a specific intentions to cross the traffic light to get there . you have a further specific intentions to left to lift your right foot and so y you really have to focus on and decide the level of abstraction that you aim at it zero in on that , and more or less ignore the rest , unless there is some implications that you want to constant draw from sub - tasks that are relevant but very difficult . grad g: m th the other thing that thought of is that you could want to go to the bakery because you 're supposed to meet your friend there or som so you like being able to infer the second thing is very useful and probably often right . grad g: maybe their friend said they were going to meet them in a bakery around the area . and i ' m , i ' m inventing contexts which are maybe unlikely , grad g: but like but it 's still the case that you could override that default by giving extra information which is to me a reason why you would keep the inference of that separate from the knowledge of " ok they really want to know if there 's a bakery around here " , which is direct . grad c: there should never be a hard coded shortcut from the bakery question to the double schema thing , how and , when i have traveled with my friends we make these exactly these kinds of appointments . we o grad g: it 's i met someone at the bakery in the victoria station t train station london before , phd a: so the enter - view - approach the eva , those are fixed slots in this particular action . every action of this kind will have a choice . 
or or will it just is it change grad e: every spg every spg action either is an enter or a view or an approach , phd a: so so for each particular action that you may want to characterize you would have some number of slots that define in some way what this action is all about . it can be either a , b or c . so is it a fixed number or do you leave it open it could be between one and fifteen it 's flexible . grad c: , the , it depends on if you actually write down the schema then you have to say it 's either one of them or it can be none , or it can be any of them . however the it seems to be sensible to me to r to view them as mutually exclusive maybe even not . phd a: no , no . there a actually by my question is simpler than that , is ok , so you have an spg action and it has three different aspects because you can either enter a building or view it or approach it and touch it . now you define another action , it 's called s s p g - one phd a: action a different action . and this action - two would have various variable possibilities of interpreting what you would like to do . and i in a way similar to either enter - view - approach you may want to send a letter , read a letter , or dictate a letter , let 's say . so , h grad b: the ok maybe i 'd these actions i if i ' m gon na answer your question or not with this , but the categories inside of action schemas , so , spg action is a category . real although what we 're specifying here is this is a category where the actions " enter , view and approach " would fall into because they have a related source - path - goal schema in our tourist domain . cuz viewing in a tourist domain is going up to it and or actually going from one place to another to take a picture , in this in a phd a: , s so it 's automatic derived fr from the structure that is built elsewhere . grad e: this is a cate this a category structure here , action schema . what are some types of action schemas ? one of the types of action schemas is source - path - goal action . and what are some types of that ? and an enter , a view , an approach . those are all source - path - goal actions . grad b: inside of enter there will be roles that can be filled . so if i want to go from outside to inside then you 'd have the roles that need to filled , where you 'd have a source - path - goal set of roles . so you 'd the source would be outside and path is to the door or whatever , so if you wanted to have a new type of action you 'd create a new type of category . then this category would we would put it or not necessarily we would put a new action in the m in the categories that in which it has the , every action has a set of related schemas like source - path - goal or force , whatever , so we would put " write a letter " in the categories that in which it had it w had schemas u grad b: and then later , there the we have a communication event action where we 'd define it down there as grad g: so there 's a bit a redundancy , in which the things that go into a particular you have categories at the top under action schema and the things that go under a particular category are supposed to have a corresponding schema definition for that type . so i what 's the function of having it up there too ? i ' m wondering whether you could just have under action schema you could just say whatever it 's gon na be enter , view or approach or whatever number of things and pos partly because you need to know somewhere that those things fall into some categories . 
and it may be multiple categories as you say which is the reason why it gets a little messy but if it has if it 's supposed to be categorized in category x then the corresponding schema x will be among the structures that follow . grad c: th this is this r this is more this is probably the way that th that 's the way that seemed more intuitive to johno i grad b: no . the the reason one reason we 're doing it this way is in case there 's extra structure that 's in the enter action that 's not captured by the schemas , grad g: i it 's easy to go back and forth is n't it ? i agree . which is why i would think you would say enter and then just say all the things that are relevant specifically to enter . and then the things that are abstract will be in the abstract things as . and that 's why the bindings become useful . grad e: ri - you 'd like so you 're saying you could practically turn this structure inside out ? , or ? grad c: get get rid of the spg slash something or the sub - actions category , because what does that tell us ? and i agree that this is something we need to discuss , grad g: i what you could say is for enter , you could say " here , list all the kinds of schemas that on the category that grad g: i list all the parent categories " . it 's just like a frame hierarchy , like you have these blended frames . so you would say enter and you 'd say my parent frames are such - and - such , h and then those are the ones that actually you then actually define and say how the roles bind to your specific roles which will probably be f richer and fuller and have other in there . grad g: it could be not a coincidence . like i said , i ' m just hitting everything with a hammer that i developed , but it 's i ' m just telling you what , you just hit the button and it 's like grad e: but there 's a good question here . like , do you when do you need damn this headset ! when you this , grad e: i . like how do i how do i come at this question ? do n't see why you would does th who uses this data structure ? like , do you say " alright i ' m going to do an spg action " . and then somebody ne either the computer or the user says " alright , i know i want to do a source - path - goal action so what are my choices among that ? " and " , ok , so do an enter - view - approach " . it 's not like that , it 's more like you say " i want to , i want to do an enter . " grad e: and then you 're more interested in knowing what the parent categories are of that . right ? so that the representation that you were just talking about seems more relevant to the kinds of things you would have to do ? grad b: i 'd i ' m not if i understand your question . only one of those things are gon na be lit up when we pass this on . so only enter will be if we if our module decided that enter is the case , view and approach will not be there . grad c: it 's it came into my mind that sometimes even two could be on , and would be interesting . nevertheless grad e: mayb - maybe i ' m not understanding where this comes from and where this goes to . grad b: if that 's the case we our i do n't think our system can handle that currently . grad c: because this is exactly the discussion we need . period . no more qualifiers than that . grad c: let 's make a sharper claim . we will not end this discussion anytime soon . grad c: and it 's gon na get more and more complex the l complexer and larger our domains get . grad c: and we will have all of our points in writing pretty soon . so this is about being recorded also . 
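( a minimal sketch , for concreteness , of the kind of xml message being argued about here . every element and attribute name below is invented for illustration -- this is not the actual module interface . one action-schema category is sent with only the hypothesized member , enter , lit up , and its roles are identified with the roles of the source-path-goal schema via id references : )

import xml.etree.ElementTree as ET

# one action-schema message: the spg-action category with only "enter"
# active, plus the related spg image schema; shared idrefs do the binding.
msg = ET.Element("action-schema")
spg_action = ET.SubElement(msg, "spg-action", {"hypothesis": "enter"})
enter = ET.SubElement(spg_action, "enter")
ET.SubElement(enter, "role", {"name": "trajector", "idref": "user-1"})
ET.SubElement(enter, "role", {"name": "container", "idref": "bakery-1"})

spg_schema = ET.SubElement(msg, "spg-schema")
ET.SubElement(spg_schema, "source", {"idref": "outside-1"})
ET.SubElement(spg_schema, "path", {"idref": "door-1"})
ET.SubElement(spg_schema, "goal", {"idref": "bakery-1"})

print(ET.tostring(msg, encoding="unicode"))

( two categories could be lit up at once simply by sending a second active member , which is the " two could be on " case raised above . )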
grad b: the r the in terms of why is it 's laid out like this versus some other grad b: that 's a contentious point between the two of us but this is one wa so this is a way to link the way these roles are filled out to the action . grad b: because if we know that enter is a t is an spg action , we know to look for an spg schema and put the appropriate fill in the appropriate roles later on . grad g: and you could have also indicated that by saying " enter , what are the kinds of action i am ? " so there 's just like reverse organization , so like unless @ are there reasons why one is better than the other that come from other sources ? grad c: yes because nobod no the modules do n't this is this is a schema that defines xml messages that are passed from one module to another , mainly meaning from the natural language understanding , or from the deep language understanding to the action planner . now the reason for not using this approach is because you always will have to go back , each module will try have to go back to look up which entity can have which , entity can have which parents , and then so you always need the whole body of y your model to figure out what belongs to what . or you always send it along with it , so you always send up " here i am this person , and have these parents " in every message . which e grad c: it may or may not be a just a pain it 's i ' m completely willing to throw all of this away grad c: and completely redo it , and it after some iterations we may just do that . grad e: i would just like to ask like , if it could happen for next time , just beca cuz i ' m new and i do n't really just what to make of this and what this is for , and like that , so if someone could make an example of what would actually be in it , like first of all what modules are talking to each other using this , grad c: , we i will promise for the next time to have fleshed out n xml examples for a run through and see how this then translates , grad c: including the " miracle occurs here " part . is there more to be said ? in principle what that this approach does , and e whether or not we take the enter - view and we all throw up the ladder wha how do how does professor peter call that ? the hhh , silence su sublimination ? throwing somebody up the stairs ? have you never read the peter 's principle anyone here ? grad f: people reach their level of max their level of at which they 're incompetent or whatever . grad c: ok , so we can promote enter - view all up a bit and get rid of the blah - x - blah asterisk sub - action item altogether . no no problem with that and we w we will play around with all of them but the principal distinction between having the pure schema and their instantiations on the one hand , and adding some whatever , more intention oriented specification on parallel to that this approach seems to be workable to me . if you all share that opinion then that made my day much happier . grad c: i ' m never happy when he uses the word " roles " , i ' m grad c: that 's all i have for today . no , there 's one more issue . bhaskara brought that one up . meeting time rescheduling . grad c: so it looks like you have not been partaking , the monday at three o ' clock time has turned out to be not good anymore . so people have been thinking about an alternative time and the one we came up with is friday two - thirty ? three ? what was it ? 
grad b: you have class until two , so if we do n't want him to run over here grad c: two - th two - thirty - ish or three or friday at three around that time . grad e: i could do that . earlier on friday is better but three if it were a three or a three thirty time then i would take the three or whatever , but three is fine . grad c: often , no , but , whenever . you are more than welcome if you think that this discussion gets you anywhere in your life then you 're free to c undergrad d: i ' m just glad that i do n't have to work it out because . i ' m just glad that do n't have to work it out myself , that i ' m not involved in the working out of it undergrad d: . that 's why i ' m glad that i ' m not involved in working it out . grad g: and we 'll get the summary like , this the c , short version , like phd a: an - and i would like to second keith 's request . an example wo would be t to have a detailed example . grad g: like have it we 'll have it in writing . or , better , speech . grad b: the other good thing about it is jerry can be on here on friday and he can weigh in as . grad c: and if you can get that binding point also maybe with a example that would be helpful for johno and me . grad c: the binding is technically no problem but it 's it for me it seems to be conceptually important that we find out if we can s if there are things in there that are a general nature , we should distill them out and put them where the schemas are . if there are things that are intention - specific , then we should put them up somewhere , grad g: so it 's gen it 's general across all of these things it 's like shastri would say binding is like an essential cognitive process . so i do n't think it will be isolated to one or the two , but you can definitely figure out where , sometimes things belong and so actually i ' m not i would be curious to see how separate the intention part and the action part are in the system . like i know the whole thing is like intention lattice , like that , so is the ri right now are the ideas the rich the rad or whatever is one potential block inside intention . it 's still it 's still mainly intention hypothesis and then that 's just one way to describe the action part of it . grad g: not just that you want to go from here to here , it 's that the action is what you intend and this action consists of all com complicated modules and image schemas and whatever . grad c: . and and there will be a relatively high level of redundancy in the sense that ultimately one grad c: so th so that if we want to get really cocky we will say " if you really look at it , you just need our rad . " you can throw the rest away , because you 're not gon na get anymore information out of the action a as you find it there in the domain object . but then again in this case , the domain object may contain information that we do n't really care about either . h but w we 'll see that then , and how it evolves . if people really like our rad , w what might happen is that they will get rid of that action thing completely , and leave it up for us to get the parser input grad g: mmm . we know the things that make use of this thing so that we can just change them so that they make use of rad . grad g: i ca n't believe we 're using this term . so i ' m like rad ! like every time i say it , it 's horrible . i see what you mean . grad g: . that 's does n't make it a great term . it 's just like those jokes where you have to work on both levels . 
grad c: because it e evokes whatever rdf is the biggest thing that 's the rich " resource description framework " grad c: and also so , description , having the word d term " description " in there is wonderful , " rich " is also great , rwww . grad g: and intentions will be " rid " ? like , are the sample data that you guys showed sometime ago like the things maybe you 're gon na run a trial tomorrow . , i ' m just wondering whether the ac some the actual sentences from this domain will be available . cuz it 'd be for me to like look if i ' m thinking about examples i ' m mostly looking at child language which will have some overlap but not total with the kinds of things that you guys are getting . so you showed some in this here before and maybe you ' ve posted it before but where would i look if i want to see ? grad c: , just transcript is just not available because nobody has transcribed it yet . e i 'll transcribe it though . grad g: ok , do n't make it a high priority i if you just tell me like two examples , y the the representational problems are i ' m , will be there , like enough for me to think about . grad c: ok , so friday , whoever wants and comes , and can . this friday . ###summary: the data collection script has been slightly modified , so that it encourages more natural dialogue between the subjects and the "wizard". another trial run will take place , while a call to recruit subjects is being emailed to students. meanwhile , the translation of the tv and cinema information system to english is almost complete. this was the basic requirement of the project. on the other hand , there was a presentation of the model that offers more elaborate action planning for smartkom , of which enter/view/approach ( eva ) modes are a part. these modes will form categories of complete xml schemas with information filled in from the language understanding in a more elaborate way than the current object-"go action"-object model. these categories will , in turn , be linked with action schemas , one of which is source-path-goal ( spg ). categories and action schemas can have -in theory- any number of blocks depending on the expansion of the domain. the notation provides for linking and referencing between different schemas. the model also allows for multiple action schemas to be triggered in parallel. however , the structure of the model is open for discussion , since its use was to elicit discussion and highlight issues. as the data collection is about to start , a call for the recruitment of subjects is going to be sent out. the main pool of subjects is going to be the student community in the institute. along with the "wizard" , who is going to be an integral part of the experiments , another person needs to be hired as the instructor for the tasks involved in them. meetings were rescheduled and are now going to take place on fridays. for the next meeting , there is going to be a presentation of the modifications in the parser module of the basic system. additionally , the proposed xml model will be put to the test with concrete data. similarly , such examples will clarify issues relating to the binding and redundancy of features with common characteristics amongst the schemas ( eg "container" for enter and "goal" for spg ). subjects in the trial runs of the experiment were given detailed descriptions of the tasks , which led to the subsequent dialogue being a re-iteration or re-phrasing of the instructions . 
using pictures instead would be one way to deal with the problem ; however , it was deemed too laborious and it would divert the focus of the experiment. as the original action planner of the smartkom system only included a generic spg schema , a new module was presented that allows for a variety of user intentions to be included. this being only a model , there are several issues that will need to be clarified in the future. how the model deals with redundancy of information among categories and action schemas , and whether a flat or a hierarchical model would be preferable are two of them. what is also clear is that as the domain of research broadens beyond the study of eva modes , the complexity of the model will also increase. another trial run of the data collection experiment is to take place , while subjects are being recruited. there have been some adjustments in the script. the prior description of tasks the subjects are going to be given is now going to be more schematic , although the intentions are still going to be clear. the lack of detailed , written explanation will hopefully encourage more natural and varied dialogue between subjects and the "wizard". on the other hand , the generator module of the system has been translated from german. eventually , a user is going to be able to request and receive tv- and cinema-related information in english. this will satisfy the basic project requirements. the model of a new module for smartkom was presented. it is an interface between the language understanding and the action planning modules. one layer of xml schemas creates a richer representation of the linguistic analysis , which is subsequently used to trigger one or more action schemas. the model keeps the concept of xml messages being sent between the modules of the system , although it is open-ended as to the number of schemas involved.
grad a: ok , this is one channel . can you , say your name and talk into your mike one at a time ? undergrad d: david , can we borrow your labelling machine to improve the quality of the labelling a little bit here ? grad a: cuz we ' ve already have like , forms filled out with the numbers on them . so , let 's keep the same numbers on them . undergrad d: want to join the meeting , dave ? do we do we have a spare , grad a: and i ' m getting lots of responses on different ones , so i assume the various and assorted p z ms are on . postdoc e: this is abou we 're mainly being taped but we 're gon na talk about , transcription for the m future meeting meetings . undergrad d: so , i do n't understand if it 's neck mounted you do n't get very good performance . postdoc e: why did n't i you were saying that but i could hear you really on the transcription on the , tape . grad a: but they were complaining about it . because it 's not it does n't go over the ears . phd b: why ? it 's not s it 's not supposed to cover up your ears . postdoc e: so that 's what you 're d he 's got it on his temples so it cuts off his circulation . grad a: , i ' m just that digit - y g sorta guy . so this is adam . postdoc e: now , just to be , the numbers on the back , this is the channel ? postdoc e: good . , this is jane , on mike number five . start ? do i need to say anything more ? grad a: should i turn off the vu meter dan ? do you think that makes any difference ? grad a: the vu meter which tells you what the levels on the various mikes are and there was one hypothesis that perhaps that the act of recording the vu meter was one of the things that contributed to the errors . undergrad d: , but eric , you did n't think that was a reasonable hypothesis , right ? grad a: , the only reason that could be is if the driver has a bug . right ? because the machine just is n't very heavily loaded . phd b: no , we do n't . but we ought to st we ought to standardize . phd b: , i s i spoke to somebody , morgan , about that . i think we should put mar , no , w we can do that . phd b: but i think we should put them in standard positions . we should make little marks on the table top . grad a: which means we need to move this thing , and sorta decide how we 're actually going to do things . phd b: i that 's the point . it 'll be a lot easier if we have a if we have them permanently in place like that . undergrad d: , lila actually is almost getting r pretty close to even getting ready to put out the purchase order . i handed it off to her about a month ago . grad a: topic of this meeting is i wanna talk a little bit about transcription . , i ' ve looked a little bit into commercial transcription services and jane has been working on doing transcription . , and so we wan wanna decide what we 're gon na do with that and then get an update on the electronics , and then , maybe also talk a little bit about some infrastructure and tools , and so on . , eventually we 're probably gon na wanna distribute this thing and we should decide how we 're gon na handle some of these factors . grad a: , so we 're collecting a corpus and it 's gon na be generally useful . , it seems like it 's not a corpus which is , has been done before . and so people will be interested in having it , and so we will grad a: and and so how we do we distribute the transcripts , how do we distribute the audio files , how do we just do all that infrastructure ? 
phd c: , for that particular issue ther there are known sources where people go to find these things like the ldc . phd b: right . the it 's not so much the actu the logistics of distribution are secondary to preparing the data in a suitable form for distribution . grad a: so , as it is , it 's a ad - hoc combination of dan set and i set up , which we may wanna make a little more formal . phd b: and the other thing is that , university of washington may want to start recording meetings as , in which case w we 'll have to decide what we ' ve actually got so that we can give them a copy . grad a: i was actually thinking i would n't mind spending the summer up there . that would be fun . grad a: , and then also i have a bunch of for doing this digits . so i have a bunch of scripts with x waves , and some perl scripts , and other things that make it really easy to extract out and align where the digits are . and if u d uw 's going to do the same thing it 's worth while for them to do these digits tasks as . grad a: and what i ' ve done is pretty ad - hoc , so we might wanna change it over to something a little more standard . , stm files , or xml , . grad a: they certainly wanna collect more data . and so they 're applying , i b is that right ? something like that . , for some more money to do more data . so we were planning to do like thirty or forty hours worth of meetings . they wanna do an additional hundred or so hours . so , they want a very large data set . , but we 're not gon na do that if we do n't get money . and i would like that just to get a disjoint speaker set and a disjoint room . , one of the things morgan and i were talking about is we 're gon na get to know this room really , phd b: , now you ' ve touched the fan control , now all our data 's gon na be undergrad d: you think that things after the f then this fan 's wired backwards . , this is high speed here . undergrad d: so , do our meetings in the dark with no air conditioning in the future . phd c: it would , it would real really mean that we should do short meetings when you turn off the air conditioning , undergrad d: actually , the a th air the air conditioning 's still working , that 's just an auxiliary fan . phd c: so , in addition to this issue about the uw there was announced today , via the ldc , a corpus from i believe santa barbara . phd c: , of general spoken english . and i exactly how they recorded it but there 's a lot of different styles of speech and what not . postdoc e: i assume so , actually , i had n't thought about that . unless they added close field later on but , i ' ve listened to some of those data and i , i ' ve been i was actually on the advisory board for when they set the project up . grad a: because it would to be able to take that and adapt it to a meeting setting . phd c: , what i was thinking is it may be useful in transcribing , if it 's far field , right ? in doing , some of our first automatic speech recognition models , it may be useful to have that data postdoc e: and and their recording conditions are really clean . , i ' ve heard i ' ve listened to the data . undergrad d: not head mounted ? and so that 's why they 're getting away with just two channels , or are they using multiple dats ? phd c: no , and their web page did n't answer it either . so i ' m , i , was thinking that we should contact them . so it 's that 's a beside - the - point . but . 
phd c: so , i can actually arrange for it to arrive in short order if we 're grad a: , it 's silly to do unless we 're gon na have someone to work on it , so maybe we need to think about it a little bit . postdoc e: the other thing too is that their jus their transcription format is really and simple in the discourse domain . but they also mentioned that they have it time aligned . , i s i saw that write - up . phd c: maybe we should maybe we should get a copy of it just to see what they did phd c: alright , i 'll do that . i ca n't remember the name of the corpus . it 's corps - s postdoc e: , sp i ' ve been i was really pleased to see that . i knew that they had some funding problems in completing it phd c: and the there 's still more that they 're gon na do like that unless they have funding issues phd c: and then it ma they may not do phase two but from all the web documentation it looked like , " , this is phase one " , whatever that means . postdoc e: super . , that , they 're really respected in the linguistics d side too and the discourse area , and so this is a very good corpus . phd c: but , it would also maybe help be helpful for liz , if she wanted to start working on some discourse issues , looking at some of this data and then , so when she gets here maybe that might be a good thing for her . grad a: actually , that 's another thing i was thinking about is that maybe jane should talk to liz , to see if there are any transcription issues related to discourse that she needs to get marked . phd c: but maybe we should , find some day that liz , liz and andreas seem to be around more often . so maybe we should find a day when they 're gon na be here and morgan 's gon na be here , and we can meet , at least this subgroup . , not necessarily have the u - dub people down . grad a: , i was even thinking that maybe we need to at least ping the u - dub to see grad a: , say " this is what we 're thinking about for our transcription " , if nothing else . so , w shall we move on and talk a little bit about transcription then ? grad a: so since that 's what we 're talking about . what we 're using right now is a tool , from this french group , called " transcriber " that seems to work very . so it has a , useful tcl - tk user interface and , undergrad d: thi - this is the process of converting audio to text ? and this requires humans just like the stp . grad a: yes , right , right . so we 're at this point only looking for word level . so all so what you have to do is just identify a segment of speech in time , and then write down what was said within it , and identify the speaker . and so the things we that we know that i know i want are the text , the start and end , and the speaker . but other people are interested in stress marking . and so jane is doing primary stress , stress marks as . , and then things like repairs , and false starts , and , filled pauses , and all that other , we have to decide how much of that we wanna do . postdoc e: i did include a glo , a certain first pass . my my view on it was when you have a repair then , it seems , we saw , there was this presentation in the one of the speech group meetings about how and liz has done some too on that , that it , that you get it bracketed in terms of like , if it 's parenthetical , which i know that liz has worked on , then y you 'll have different prosodic aspects . 
and then also if it 's a r if it 's a repair where they 're like what did , then it 's to have a sense of the continuity of the utterance , the start to be to the finish . and , it 's a little bit deceptive if you include the repai the pre - repair part and sometimes or of it 's in the middle . anyway , so what i was doing was bracketing them to indicate that they were repairs which is n't , very time - consuming . undergrad d: i is there already some plan in place for how this gon na be staffed or done ? or is it real is that what we 're talking about here ? grad a: that 's part of the thing we 're talking about . so what we wanted to do was have jane do one meeting 's worth , forty minutes to an hour , and grad a: ten times about , is and so one of the things was to get an estimate of how long it would take , and then also what tools we would use . and so the next decision which has to be made actually pretty soon is how are we gon na do it ? undergrad d: and so you make jane do the first one so then she can decide , we do n't need all this , just the words are fine . postdoc e: i wanna hear about these , we have a g you were s continuing with the transcription conventions for s grad a: r right , so one option is to get linguistics grad students and undergrads to do it . and that 's happened in the past . and that 's probably the right way to do it . , it will require a post pass , people will have to look at it more than once to make that it 's been done correctly , but ca n't imagine that we 're gon na get anything that much better from a commercial one . and the commercial ones i ' m will be much more expensive . grad a: but , that 's what we 're talking about is getting some slaves who need money grad a: i meant joy . and so again , i have to say " are we recording " and then say , morgan has consistently resisted telling me how much money we have . postdoc e: th there is als , really . there is also the o other possibility which is if you can provide not money but instructional experience or some other perks , you can you could get people to , to do it in exchange . undergrad d: , i b but , morgan 's in a bind over this and thing to do is just the field of dreams theory , which is we go ahead as though there will be money at the time that we need the money . and that 's the best we can do . i b to not do anything until we get money is ridiculous . we 're not gon na do any get anything done if we do that . grad a: so at any rate , jane was looking into the possibility of getting students , at is that right ? talking to people about that ? postdoc e: i ' m afraid i have n't made any progress in that front yet . i should ' ve sent email and i have n't yet . undergrad d: i d do so until you actually have a little experience with what this french thing does we do n't even have undergrad d: i ' m . so that 's where you came up with the f the ten x number ? or is that really just a ? postdoc e: i have n't done a s see , i ' ve been at the same time doing a boot strapping in deciding on the transcription conventions that are , and like , how much postdoc e: there 's some interesting human factors problems like , what span of time is it useful to segment the thing into in order to , transcribe it the most quickly . cuz then , you get like if you get a span of five words , that 's easy . but then you have to take the time to mark it . 
and then there 's the issue of it 's easier to hear it th right the first time if you ' ve marked it at a boundary instead of somewhere in the middle , cuz then the word 's bisected or whatever and so , i ' ve been playing with , different ways of mar cuz i ' m thinking , , if you could get optimal instructions you could cut back on the number of hours it would take . undergrad d: d does this tool you 're using is strictly it does n't do any speech recognition does it ? postdoc e: no , it does n't but what a super tool . it 's a great environment . undergrad d: but but is there anyway to wire a speech recognizer up to it and actually run it through undergrad d: , a couple things . first of all the time marking you 'd get you could get by a tool . undergrad d: , i ' m think about the close caption that you see running by on live news casts . undergrad d: no , i understand . and in a lot of them you see typos and things like that , but it occurs to me that it may be a lot easier to correct things than it is to do things from scratch , no matter how wonderful the tool is . phd c: , but sometimes it 's easier to type out something instead of going through and figuring out which is the right undergrad d: s but again the timing is for fr should be for free . the timing should be grad a: we do n't care about the timing of the words , just of the utterances . phd b: we have n't decided which time we care about , and that 's one of the things that you 're saying , is like you have the option to put in more or less timing data and , be in the absence of more specific instructions , we 're trying to figure out what the most convenient thing to do is . grad a: so what she 's done so far , is more or less breath g not breath groups , phrases , continuous phrases . and so , that 's because you separate when you do an extract , you get a little silence on either end . so that seems to work really . postdoc e: that 's ideal . although i was i , the alternative , which i was experimenting with before i ran out of time , recently was , that , ev if it were like an arbitrary segment of time i t pre - marked cuz it does take time to put those markings in . it 's really the i the interface is wonderful because , the time it takes is you listen to it , and then you press the return key . but then , it 's like , you press the tab key to stop the flow and , the return key to p to put in a marking of the boundary . but , there 's a lag between when you hear it and when you can press the return key so it 's slightly delayed , so then you listen to it a second time and move it over to here . undergrad d: ar but are are those d delays adjustable ? those delays adjustable ? see a lot of people who actually build with human computer interfaces understand that delay , and so when you by the time you click it 'll be right on because it 'll go back in time to put the phd b: but we ' ve got the most channel data . we 'd have to do it from your signal . , we ' ve got a lot of data . grad a: , i the question is how much time will it really save us versus the time to write all the tools to do it . phd b: but the chances are if we 're talking about collecting ten or a hundred hours , which is going to take a hundred or a thousand hours to transcribe grad a: we 're gon na need we 're gon na need ten to a hundred hours to train the tools , and validate the tools the do the d to do all this anyway . postdoc e: i ' m . i wish you had told me wish you 'd told me . phd b: , i it seems like , i . 
, it 's maybe like a week 's work to get to do something like this . so forty or fifty hours . postdoc e: could you get it so that with so it would detect volume on a channel and insert a marker ? and the format 's really transparent . it 's just a matter of a very c clear it 's xml , is n't it ? it 's very , i looked at the file format and it 's just it has a t a time indication and then something or other , and then an end time or other . phd c: so maybe we could try the following experiment . take the data that you ' ve already transcribed and postdoc e: s i ' m not if it 's that 's much but anyway , enough to work with . phd c: , and throw out the words , but keep the time markings . and then go through , and go through and try and re - transcribe it , given that we had perfect boundary detection . postdoc e: , that 's what i was thinking . i 'd be cheating a little bit g with familiarity effect . phd c: , that 's part of the problem is , is that what we really need is somebody else to come along . phd b: , no , you should do it do it again from scratch and then do it again at the boundaries . so you do the whole thing three times and then we get undergrad d: and then w since we need some statistics do it three more . and so you 'll get down to one point two x by the time you get done . postdoc e: i 'll do that tomorrow . i should have it finished by the end of the day . undergrad d: no , but the fact that she 's did it before just might give a lower bound . that 's all . , which is fine . undergrad d: it 's and if the lower bound is nine x then w it 's a waste of time . postdoc e: , but there 's an extra problem which is that i did n't really keep accurate postdoc e: , it was n't a pure task the first time , so , it 's gon na be an upper bound in that case . and it 's not really strictly comparable . so though it 's a good proposal to be used on a new batch of text that i have n't yet done yet in the same meeting . could use it on the next segment of the text . phd b: the point we where do we get the oracle boundaries from ? or the boundaries . grad a: , one person would have to assign the boundaries and the other person would have to postdoc e: , but the oracle boundaries would come from volume on a partic specific channel would n't they ? phd c: no , no . you wanna know given given a perfect human segmentation , you wanna know how , the question is , is it worth giving you the segmentation ? grad a: , that 's easy enough . i could generate the segmentation and you could do the words , and time yourself on it . grad a: that would at least tell us whether it 's worth spending a week or two trying to get a tool , that will compute the segmentations . undergrad d: and the thing to keep in mind too about this tool , guys is that , you can do the computation for what we 're gon na do in the future but if uw 's talking about doing two , or three , or five times as much and they can use the same tool , then there 's a real multiplier there . postdoc e: and the other thing too is with speaker identification , if that could handle speaker identification that 's a big deal . postdoc e: , that 's a feature . that 's a major that 's like , one of the two things that phd c: , there 's gon na be in the meeting , like the reading group meeting that we had the other day , that 's it 's gon na be a bit of a problem because , like , i was n't wearing a microphone f and there were other people that were n't wearing microphones . 
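( a rough sketch of the volume-detection idea : frame energy on one channel against a threshold , merged into start/end times . the file name , the threshold values , and the turn element printed at the end are assumptions -- matching transcriber 's real save format would be a separate step . it also assumes 16-bit mono audio : )

import wave
import numpy as np

def find_segments(path, frame_ms=25, thresh=0.01, min_gap_s=0.5):
    # read a 16-bit mono wav file and normalize to [-1, 1]
    w = wave.open(path)
    sr = w.getframerate()
    x = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    x = x.astype(np.float32) / 32768.0
    # approximate fixed-size frames and per-frame energy
    hop = int(sr * frame_ms / 1000)
    frames = np.array_split(x, max(1, len(x) // hop))
    energy = np.array([np.mean(f * f) for f in frames])
    speech = energy > thresh
    segs, start = [], None
    for i, on in enumerate(speech):
        t = i * frame_ms / 1000.0
        if on and start is None:
            start = t
        elif not on and start is not None:
            if segs and start - segs[-1][1] <= min_gap_s:
                segs[-1][1] = t      # bridge short pauses into one segment
            else:
                segs.append([start, t])
            start = None
    if start is not None:
        segs.append([start, len(speech) * frame_ms / 1000.0])
    return segs

for s, e in find_segments("meeting_mix.wav"):
    print('<Turn startTime="%.2f" endTime="%.2f"> ... </Turn>' % (s, e))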
phd b: so i need to we need to look at what the final output is but it seems like we it does n't it seems like it 's not really not that hard to have an automatic tool to generate the phrase marks , and the speaker , and speaker identity without putting in the words . grad a: so it would be easy . if you 'd tell me where it is , ? postdoc e: we did n't finish the part of work already completed on this , did we ? , you talked a little bit about the transcription conventions , and , i you ' ve mentioned in your progress report , or status report , that you had written a script to convert it into so , i when i i the it 's quickest for me in terms of the transcription part to say something like , if adam spoke to , to just say , " a colon " , like who could be , at the beginning of the line . and e colon instead of entering the interface for speaker identification and clicking on the thing , indicating the speaker id . so , and then he has a script that will convert it into the thing that , would indicate speaker id . grad a: so so the at ten x seems to be pretty standard . everyone more or less everyone you talk to says about ten times for hard technical transcription . grad a: which is a service that you send an audio file , they do a first - pass speech recognition . and then they do a clean up . but it 's gon na be horrible . they 're never gon na be able to do a meeting like this . grad a: , for cyber transcriber they do n't quote a price . they want you to call and talk . so for other services , they were about thirty dollars an hour . undergrad d: d did you talk to anybody that does closed captioning for , tv ? cuz they a usually at the end of the show they 'll tell what the name of the company is , the captioning company that 's doing it . grad a: it was just a net search . and , so it was only people who have web pages and are doing through that . undergrad d: , the thing about this is thinking , maybe a little more globally than i should here but that really this could be a big contribution we could make . , we ' ve been through the stp thing , we it what it 's like to manage the process , and admittedly they might have been looking for more detail than what we 're looking for here but it was a big hassle , right ? , , they constantly could ' ve reminding people and going over it . and clearly some new needs to be done here . and it 's only our time , where " our " includes dan , dan and you guys . it does n't include me . j just seems like phd b: , i if we 'd be able to do any thing f to help stp type problems . but certainly for this problem we can do a lot better than phd b: because they had because they only had two speakers , right ? , the segmentation problem is grad a: , mostly because they were doing much lower level time . so they were doing phone and syllable transcription , as , word transcription . and so we 're w we decided early on that we were not gon na do that . undergrad d: i see . but there 's still the same issue of managing the process , of reviewing and keeping the files straight , and all this , that which is clearly a hassle . grad a: and so what i ' m saying is that if we hire an external service we can expect three hundred dollars an hour . that 's the ball park . there were several different companies that and the range was very tight for technical documents . twenty - eight to thirty - two dollars an hour . phd c: and who knows if they 're gon na be able to m manage multal multiple channel data ? 
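( a sketch of the shorthand-to-speaker-id conversion described a little further up : " a colon " at the start of a line names the speaker and a script expands it . the letter-to-name table and the output form here are hypothetical : )

import re

SPEAKERS = {"a": "adam", "d": "dan", "e": "eric", "j": "jane"}  # hypothetical

def convert(lines):
    turns = []
    for line in lines:
        # "?" is reserved for segments with no speaker, per the convention
        m = re.match(r"^\s*([a-z?])\s*:\s*(.*)$", line)
        if m:
            who = SPEAKERS.get(m.group(1), "no-speaker")
            turns.append([who, m.group(2)])
        elif turns:
            # unprefixed line: continuation of the previous turn
            turns[-1][1] += " " + line.strip()
    return turns

for who, text in convert(["a: ok , this is one channel .", "? :"]):
    print(who, "|", text)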
grad a: sev - several of them say that they 'll do meetings , and conferences , and s and so on . none of them specifically said that they would do speaker id , or speaker change mark . they all just said transcription . undergrad d: th - th the th there may be just multiplier for five people costs twice as much and for ten people co something like that . grad a: , the way it worked is it was scaled . so what they had is , if it 's an easy task it costs twenty - four dollars an hour and it will take maybe five or six times real time . and what they said is for the hardest tasks , bad acoustics , meeting settings , it 's thirty - two dollars an hour and it takes about ten times real time . so that we can count on that being about what they would do . it would probably be a little more because we 're gon na want them to do speaker marking . undergrad d: a lot of companies i ' ve worked for y the , the person leading the meeting , the executive or whatever , would go around the room and mentally calculate h how many dollars per hour this meeting was costing , in university atmosphere you get a little different thing . but , it 's a lot like , " he 's worth fifty an hour , he 's worth " and so he so here we 're thinking , " let 's see , if the meeting goes another hour it 's going to be another thousand dollars . " ? grad a: but at any rate , so we have a ballpark on how much it would cost if we send it out . undergrad d: , i ' m , three hundred . right , i w got an extra factor of three there . phd c: so it 's thirty dollars an hour , essentially , right ? but we can pay a graduate student seven dollars an hour . and the question is what 's the difference phd c: or ei eight dollars . what do what the going rate is ? it 's it 's on the order of eight to ten . phd c: , i what the standard but there is a standard pay scale what it is . phd c: so that means that even if it takes them thirty times real time it 's cheaper to do graduate students . grad a: , that 's why i said originally , that i could n't imagine sending it out 's gon na be cheaper . postdoc e: the other thing too is that , if they were linguistics they 'd be , in terms of like the post editing , i tu content wise they might be easier to handle cuz they might get it more right the first time . grad a: and also we would have control of , we could give them feedback . whereas if we do a service it 's gon na be limited amount . grad a: , we ca n't tell them , " for this meeting we really wanna mark stress grad a: and for this meeting we want " and and they 're not gon na provide stress , they 're not gon na re provide repairs , they 're not gon na provide they may or may not provide speaker id . so that we would have to do our own tools to do that . undergrad d: just hypoth hypothetically assuming that we go ahead and ended up using graduate students . i who 's the person in charge ? who 's gon na be the steve here ? postdoc e: , interesting . , now would this involve some manner of , monetary compensation or would i be the voluntary , coordinator of multiple transcribers for checking ? grad a: , i would imagine there would be some monetary involved but we 'd have to talk to morgan about it . undergrad d: , it just means you have to stop working for dave . see ? that 's why dave should have been here . grad a: , i would like you to do it because you have a lot more experience than i do , but if that 's not feasible , i will do it with you as an advisor . undergrad d: w we 'd like you to do it and we 'd like to pay you . 
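( checking that ballpark : $32 an hour of transcriber time at ten times real time comes to about $320 per hour of meeting audio -- the " three hundred " figure above -- while a student at $8 an hour would have to take forty times real time to cost the same . so even the pessimistic thirty-times estimate comes out cheaper , at roughly $240 per meeting hour . )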
postdoc e: boy , if i wanted to increase my income i could start doing the transcribing again . undergrad d: an and be and say , would you like fries with that when you 're thinking about your pay scale . postdoc e: , no , that i would be interested in that in becoming involved in the project in some aspect like that phd b: what s so what are you so you ' ve done some portion of the first meeting . and what 's your plan ? postdoc e: what , what was right now we have p so i gave him the proposal for the transcription conventions . he made his , suggestion of improvement . the the it 's a good suggestion . so as far as i ' m concerned those transcription conventions are fixed right now . and so my next plan would be postdoc e: they 're very minimal . so , it would be good to just to summarize that . so , one of them is the idea of how to indicate speaker change , and this is a way which meshes with , making it so that , , on the at the boy , it 's such a interface . when you when you get the , you get the speech signal you also get down beneath it , an indication of , if you have two speakers overlapping in a s in a single segment , you see them one displayed one above each other . and then at the same time the top s part of the screen is the actual verbatim thing . you can clip click on individual utterances and it 'll take you immediately to that part of the speech signal , and play it for you . and you can , you can work pretty between those two these two things . grad a: , the user interface only allows two . and so if you 're using their interface to specify overlapping speakers you can only do two . but my script can handle any . and their save format can handle any . and so , using this the convention that jane and i have discussed , you can have as many overlapping speakers as you want . undergrad d: do y is this a , university project ? th - this is the french software , right ? grad a: multichannels was also , they said they wanted to do it but that the code is really very organized around single channels . so that 's n unlikely to ha happen . undergrad d: do - do what they 're using it for ? why 'd they develop it ? phd c: they 're they have some connection to the ldc cuz the ldc has been advising them on this process , the linguistic data consortium . so but a apart from that . grad a: it 's also all the source is available . if you if you speak tcltk . grad a: and they have they ' ve actually asked if we are willing to do any development and i said , maybe . so if we want if we did something like programmed in a delay , which actually is a great idea , i ' m they would want that incorporated back in . postdoc e: pre - lay . , and they ' ve thought about things . , they do have so you have when you play it back , it 's it is useful to have , a break mark to se segment it . but it would n't be strictly necessary cuz you can use the , the tabbed key to toggle the sound on and off . , it 'll stop the s speech if you press a tab . and , . and so , that 's a feature . and then also once you ' ve put a break in then you have the option of cycling through the unit . you could do it like multiply until you get crazy and decide to stop cycling through that unit . undergrad d: loop it ? yo - you n , there 's al also the user interface that 's missing . undergrad d: it 's missing from all of our offices , and that is some analog input for something like this . it 's what audio people actually use . 
it 's something that wh when you move your hand further , the sound goes faster past it , like fast forward . , like a joy stick or a , you could wire a mouse or trackball to do something like that . undergrad d: no , but i ' m saying if this is what professionals who actually do this thing for m for video or for audio where you need to do this , and so you get very good at jostling back and forth , rather than hitting tab , and backspace , and carriage return , and enter , and things like that . grad a: , we talked about things like foot pedals and other analog so , tho those are things we could do how much it 's worth doing . we 're just gon na have postdoc e: i agree . they they have several options . so , , i mentioned the looping option . another option is it 'll pause when it reaches the end of the boundary . and then to get to the next boundary you just press tab and it goes on to the next unit . , it 's very nicely thought out . they thought about and also it 'll go around the c the , i wanna say cursor but i ' m not if that 's the right thing . postdoc e: anyway , you can so they thought about different ways of having windows that you c work within , and but so in terms of the con the conventions , then , , it 's strictly orthographic which means with some w provisions for , w , colloquial forms . so if a person said , " cuz " instead of " because " then i put a an apostrophe at the beginning of the word and then in double ang angle brackets what the full lexical item would be . and this could be something that was handled by a table but to have a convention marking it as a non - standard or wha i do n't mean standard but a non , ortho orthographic , whatever . postdoc e: gon na or " wanna " , the same thing . and and there would be limits to how much refinement you want in indicating something as non - standard pres pronunciation . postdoc e: , yes , there was some in my view , when i when you ' ve got it densely overlapping , i did n't worry about s specific start times . postdoc e: that this is not gon na be easily processed anyway and maybe i should n't spend too much time getting exactly when the person said " no " , or , , i " immediate " . and instead just rendered " within this time slot , there were two people speaking during part of it and if you want more detail , figure it out for yourself " , grad a: , what w what eric was talking about was channels other than the direct speech , phd c: , what is wh , when somebody says " - " in the middle of , a @ phd c: , cuz i was listening to dan was agreeing a lot to things that you were saying as you were talking . postdoc e: appreciate it . , if it if there was a word like " right " , then i wou i would indicate that it happened within the same tem time frame phd b: i transcribed a minute of this and there was a lot of overlapping . it was grad a: , when no one i when we 're not actually in the meeting , and we 're all separated , and doing things . but even during the meeting there 's a lot of overlap but it 's marked pretty clearly . , some of the backchannel jane had some comments and but a lot of them were because you were at the meeting . and so that often you ca n't tell . postdoc e: only when it was otherwise gon na be puzzling because he was in the other room talking . grad a: , but someone who , was just the transcriber would n't have known that . or when dan said , " i wa i was n't talking to you " . 
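( a sketch of the table idea for colloquial forms mentioned above : the convention is an apostrophe on the reduced word plus the full lexical item in double angle brackets , e.g. 'cuz <<because>> . the ascii brackets and the table entries below are assumptions : )

import re

FULL_FORMS = {"cuz": "because", "gonna": "going to", "wanna": "want to"}

def annotate(text):
    # add the convention wherever the table recognizes a reduced form
    def mark(m):
        w = m.group(0)
        return "'%s <<%s>>" % (w, FULL_FORMS[w]) if w in FULL_FORMS else w
    return re.sub(r"\w+", mark, text)

def expand(text):
    # recover the full orthographic form from the annotation
    return re.sub(r"'\w+\s*<<([^>]+)>>", r"\1", text)

print(annotate("cuz i wanna see it"))          # 'cuz <<because>> i 'wanna <<want to>> see it
print(expand(annotate("cuz i wanna see it")))  # because i want to see it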
postdoc e: he was so he was checking the meter levels and we were handling things while he was labeling the whatever it was , the pda ? and and so he was in you were talking so i was saying , like " and i could label this one left . right ? " and he and he said , " i do n't see anything " . and he said , " i was n't talking to you " . or it was n't it did n't sound quite that rude . but really , no , w in the context if he ca n't hear what he 's saying grad a: what it what happens is if you 're a transcriber listening to it sounds like dan is just being a total impolite . postdoc e: , you 'll see . you can listen to it . , it was you who was . no , but you were asking off the wall questions . grad a: but but if you knew that i was n't actually in the room , and that dan was n't talking to me , it became ok . postdoc e: and that 's w that 's where i added comments . the rest of the time i did n't bother with who was talking to who but this was unusual circum circumstance . postdoc e: and part of it was funny , reason was because it was a mixed signal so you could n't get any clues from volume that , he was really far away from this conversation . grad a: i should rewrite the mix tool to put half the people in one channel and half in the other . i have a auto - gain - mixer tool that mixes all the head mounted microphones into one signal postdoc e: but it would be , i did n't wanna add more contextual comments than were needed but that , it seemed to me , clarified that the con what was going on . and , phd c: i was just gon na ask , so wanted to c finish off the question i had about backchannels , if that 's ok , which was , so say somebody 's talking for a while and somebody goes " - " in the middle of it , and what not , does the conversation come out from the or the person who 's speaking for the long time as one segment and then there 's this little tiny segment of this other speaker or does it does the fact that there 's a backchannel split the it in two . postdoc e: ok , my focus was to try and maintain conten con content continuity and , to keep it within what he was saying . like i would n't say breath groups but prosodic or intonational groups as much as possible . so if someone said " - " in the middle of a of someone 's , intonational contour , i indicated it as , like what you just did . then i indicated it as a segment which contained @ this utterance plus an overlap . phd b: but that 's but there 's only one time boundary for both speakers , right ? undergrad d: whenever we use these speech words we should always do the thing like you 're talking about , accent , postdoc e: and then " hesitation " . and so then , in terms of like words like " and " wrote them because i figured there 's a limited number , and i keep them to a , limited set because it did n't matter if it was " mmm " or " , versus " . so always wrote it as u m . and " - " , " uhuh . " , like a s set of like five . but in any case i did n't mark those . postdoc e: i 'd be happy with that . that 'd be fine . it 'd be good to have that in the conventions , what 's to be used . grad a: i did notice that there were some segments that had pauses on the beginning and end . we should probably mark areas that have no speakers as no speaker . then , so question mark colon is fine for that . grad a: , i wanna leave the marked i do n't want them to be part of another utterance . so you just you need to have the boundary at the start and the end . 
postdoc e: now that 's refinement that , maybe it could be handled by part of the script more phd b: it seems like the , tran the transcription problem would be very different if we had these automatic speaker detection turn placing things . because suddenly , i , actually it sounds like there might be a problem putting it into the software if the software only handles two parallel channels . but assuming we can get around that somehow . grad a: it can read and write as many as you want , it 's just that it phd b: but what if you wanna edit it ? right ? , we 're gon na generate this transcript with five tracks in it , but with no words . someone 's gon na have to go in and type in the words . and if there are five people speaking at once , grad a: right , i it 's i did n't explain it . if we use the little the conventions that jane has established , i have a script that will convert from that convention to their saved convention . postdoc e: which allows five . and it can be m edited after the fact , ca n't it also ? but their but their format , if you wanted to in indicate the speakers right there instead of doing it through this indirect route , then i they a c window comes up and it only allows you to enter two speakers . undergrad d: but you 're saying that by the time you call it back in to from their saved format it opens up a window with five speakers ? grad a: the whole saved form the saved format and the internal format , all that , handles multiple speakers . grad a: it 's just there 's no user interface for specifying multiple any more than two . undergrad d: cuz we 're always gon na wanna go through this preprocessing , assuming it works . postdoc e: and that works nicely cuz this so quick to enter . so i would n't wanna do it through the interface anyway adding which worry who the speaker was . and then , let 's see what else . i wanted to have so sometimes a pers i in terms of like the continuity of thought for transcriptions , it 's i it is n't just words coming out , it 's like there 's some purpose for an utterance . and sometimes someone will do a backchannel in the middle of it but you wanna show that it 's continued at a later point . so i have a convention of putting like a dash arrow just to indicate that this person 's utterance continues . and then when it , catches back up again then there 's an arrow dash , and then you have the opposite direction to indicate continuation of ones own utterance versus , sometimes we had the situation which is , which you get in conversations , of someone continuing someone else 's utterance , and in that case i did a tilde arrow versus a arrow tilde , to indicate that it was continuation but it was n't , i did equal arrow for the own for yourself things cuz it 's the speakers the same . and then tilde arrow if it was a different if a different speaker , con continuation . but just , the arrows showing continuation of a thought . and then you could track whether it was the same speaker or not by knowing , at the end of this unit you 'd happened later . and that was like this person continued and you 'd be able to look for the continuation . phd b: but the only time that becomes ambiguous is if you have two speakers . like , if you if you only have one person , if you only have one thought that 's continuing across a particular time boundary , you just need one arrow at each end , and if it 's picked up by a different speaker , it 's picked up by a different speaker . 
the time it becomes ambiguous if you have more than one speaker and that and they swap . i if you have more than one thread going , then you need to know whether they were swapped or not . grad a: especially for meetings . , if i if you were just recording someone 's day , it would be impossible . , grad a: if you were trying to do a remembrance agent . but for meetings it 's probably alright . but , a lot of these issues , that for , from my point of view , where wanna do speech recognition and information retrieval , it does n't really matter . but other people have other interests . phd b: i know . but it does feel like it 's really in there . i did this transcription and i marked that , i marked it with ellipsis because it seemed like there was a difference . it 's something you wanted to indicate that it that i this was the end of the phrase , this was the end of that particular transcript , but it was continued later . and i picked up with an ellipsis . postdoc e: that 's , i that 's why i did n't do it n , that 's why about it , and re - ev and it did n't do i did n't do it in ten times the time . grad a: , so anyway , are we interested then in writing tools to try to generate any of this automatically ? is that something you want to do , dan ? postdoc e: i also wanted to ask you if you have a time estimate on the part that you transcribed . do you have a sense of how long phd b: , it took me half an hour to transcribe a minute , but i did n't have any i did n't even have a i was trying to get transcriber to run but i could n't . so i was doing it by typ typing into a text file and trying to fit it was horrible . undergrad d: so thirty to one 's what you got ? so that 's a new upper limit ? phd c: so so if we hired a who if we hired a whole bunch of dan 's undergrad d: d does n't it beep in the other room when you 're out of disk space ? phd c: is there maybe we should s consider also , starting to build up a web site around all of these things . phd c: i want to introduce the word " snot - head " into the conversation at this point . phd c: alright , see here 's my thought behind it which is that , the that you ' ve been describing , jane , i gu one has to , indicate , i is very interesting , i 'd like to be able to pore through , the types of tr conventions that you ' ve come up with and like that . so i would like to see that on the web . postdoc e: now , w the alternative to a web site would be to put it in doctor speech . cuz cuz what i have is a soft link to my transcription that i have on my account postdoc e: web site 's . then you have to t you have to do an ht access . phd b: we could actually maybe we could use the tcl plug - in . , man . undergrad d: but he does such a good job of it . he should be allowed to , w do it . undergrad d: if you just did a crappy job , no nobody would want you to do it . phd b: i sh i should n't be allowed to by m by my own by my according to my own priorities . alright . let 's look at it anyway . so definitely we should have some access to the data . grad a: and we have quite a disparate number of web and other sorts of documents on this project spread around . i have several and dan has a few , grad a: , i ' m talking about putting together all the data in a form that is legible , and pleasant to read , and up to date , and et cetera , et cetera . undergrad d: but , is it against the law to actually use a tool to help your job go easier ? grad a: it 's it 's against the law to use a tool . i have n't found any tools that i like . 
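Backing up to the continuation markers postdoc e laid out: the convention is easier to see on an invented fragment. The exact glyphs are not pinned down in the discussion, so the rendering below assumes "-->"/"<--" for a speaker resuming their own utterance and "~>"/"<~" for a different speaker picking up the thought.

```
A:  so the big question for the transcribers -->     (A's thought continues later)
B:  - .                                              (backchannel, own segment)
A:  <-- is how much detail we mark per segment .     (A resumes her own utterance)
C:  and the time estimate depends on whether ~>      (thought handed off)
D:  <~ we pre-segment the channels first .           (D finishes C's thought)
```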
grad a: it 's just as easy to use to edit the raw html as anything else . undergrad d: no , it 's true that he has n't found any he likes . the question is what 's he looked at . grad a: now , i if i were doing more powerful excuse me more complex web sites i might want to . postdoc e: , would this be to document it also for outside people or mainly for in house use ? undergrad d: , you 're leaving . people at uw wanna look at it . , it 's internal until grad a: send me links and i wi send me pointers , rather , and i 'll put it together . phd b: i ' m not how important that distinction is . i do n't think we should say , " , it 's internal therefore we do n't have to make it very good " . , you can say " , it 's internal therefore we can put data in it that we do n't have to worry about releasing " . but to try and be coherent and make it a presentation . undergrad d: i was looking for the actual box i plan to use , but i c all i could i could n't find it at the local store . but this is the technology . it 's actually a little bit thinner than this . and it 's two by two , by one , and it would fit right under the right under th the lip , undergrad d: there 's a lip in these tables . and , it oc i p especially brought the bottom along to try and generate some frequencies that you may not already have recorded . undergrad d: let 's see what it does to the but this was the just to review , and i also brought this along rather than the projector so we can put these on the table , and w push them around . undergrad d: that that 's the six tables that we 're looking at . these six tables here , with little boxes , in the middle here . which es would , the boxes are out of the way anyway . i 'll - i 'll show you the cro this is the table cross section . i if people realize what they 're looking at . undergrad d: why not ? , cuz this is what 's gon na happen . you got plenty of data . i wo n't come to your next meeting . and and you so this is the box 's undergrad d: or not to be . , the box , there 's a half inch lip here . the box is an inch thick so it hangs down a half an inch . and so the two head set jacks would be in the front and then the little led to indicate that box is live . the the important issue about the led is the fact that we 're talking about eight of these total , which would be sixteen channels . and , even though we have sixteen channels back at the capture , they 're not all gon na be used for this . so there 'd be a subset of them used for j just use the ones at this end for this many . so excuse me . you 'd like a way to tell whether your box is live , so the led would n't be on . undergrad d: so if you 're plugged in it does n't work and the led is off that 's a tip off . and then the , would wire the all of the cables in a bundle come through here and o collect these cables at the same time . , so this notion of putting down the p z ms and taking them away would somehow have to be turned into leaving them on the table undergrad d: or and so the you we just epoxy them down . big screw into the table . undergrad d: and even though there 's eight cables they 're not really very big around so my model is to get a p piece of undergrad d: , that people put with the little you slip the wires into that 's shaped like that cross section . undergrad d: i ' m r a i ' m going up and then i ' m going down . phd b: , because then you do n't have to just have one each . so that if t if you have two people sitting next to each other they can actually go into the same box . 
undergrad d: and to see , thi this is really the way people sit on this table . th dot , dot . undergrad d: that 's the way people sit . that 's how many chairs are in the room . undergrad d: , true enough . and actually , at the m my plan is to only bring eight wires out of this box . undergrad d: and , it 's function is to s to , essentially a wire converter to go from these little blue wires to these black wires , plus supply power to the microphones cuz the he the , cheap head mounteds all require low voltage . grad a: so so you 'd imagine some in some patch panel on top to figure out what the mapping was between each d of these two and each of those one or what ? undergrad d: i w i the simplest thing i could imagine , i which is really , really simple is to quite literally that these things plug in . and there 's a plug on the end of each of these , ei eight cables . undergrad d: an - and there 's only four slots that are , in the first version or the version we 're planning to build . so that was the whole issue with the led , that you plug it in , the led comes on , and you 're live . undergrad d: now the subtle issue here is that tha i have n't really figured out a solution for this . so , we it 'll have to be convention . what happens if somebody unplugs this because they plug in more of something else ? the there 's no clever way to let the up stream guys know that you 're really not being powered . th there will be a certain amount of looking at cables has to be done if people , rewire things . phd b: , we i had that last time . but there are actually that , there 's an extra there 's a mix out on the radio receiver ? so there are actually six xlr outs on the back of the radio receiver and only five cables going in , i had the wrong five , so i ended up not recording one of the channels and recording the mix . undergrad d: how interesting . d did you do any recognition on the mix out ? wonder whether it works any phd b: but i subtracted the four that i did have from the mix and got a pretty good approximation of the @ . undergrad d: i was wrestling with th with literally the w number of connectors in the cable and the , powering system . and i was gon na do this very clever phantom power and i decided a couple days ago not to do it . so i ' m ready to build it . which is to say , the neighborhood of a week to get the circuit board done . grad a: so the other thing i 'd like to do is , do something about the set up grad a: so that it 's a little more presentable and organized . and i ' m just not what that is . , some cabinet . undergrad d: build a cabinet . the the difficulty for this project is the intellectual capital to design the cabinet . in other words , to figure out ex exactly what the right thing is . that cabinet can go away . we can use that for kindling . but if you can imagine what the right form factor is . dan - dan and i have gone around on this , and we were thinking about something that opened up in the top to allow access to the mixer . but there 's these things sticking out of the mixer which are a pain , so you end up with this thing that if you stuck the mixer up here and the top opened , it 'd be fine . you would n't necessarily you s understand what i ' m undergrad d: the you can start s sketching it out , and certainly build it out of oak no problem , would it , arb , arbitrarily amount of grad a: i need a desk at home too , alright ? is that gon na be a better solution than just going out and buy one ? 
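The channel-recovery trick phd b mentions, subtracting the four recorded channels from the mix-out to approximate the one that was missed, is easy to verify numerically. This assumes the mix is close to a plain unweighted sum of the channels, which is an idealization.

```python
import numpy as np

rng = np.random.default_rng(0)
chans = rng.standard_normal((5, 16000))    # five "true" mic channels
mix = chans.sum(axis=0)                    # the recorder's mix-out
recorded = chans[:4]                       # the fifth channel was never patched in
missing_est = mix - recorded.sum(axis=0)   # subtract what was recorded
print(np.allclose(missing_est, chans[4]))  # True in this idealized case
```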
undergrad d: , the as we found out with the thing that , jeff bought a long time ago to hold our stereo system the you buy is total crap . and this is something you buy . undergrad d: it 's total crap . , it 's useless for this function . works fine for holding a kleenex , grad a: so , i g i it 's just a question , is that something you wanna spend your time on ? undergrad d: i have no problem . no , but w certainly one of the issues is the , is security . , we ' ve been lax and lucky . undergrad d: really lucky with these things . but they 're not ours , so the , the flat panels . grad a: i ' m telling you , i ' m just gon na cart one of them away if they stay there much longer . grad a: j , then the other question is do we wanna try to do a user interface that 's available out here ? grad a: a user interface . , do we wanna try to get a monitor ? or just something . undergrad d: how about use the thing that aciri 's doing . which is to say just laptop with a wireless . undergrad d: n no , i ' m serious . does does the wireless thing work on your phd b: no , no , that 's the right way to do it . t to have it , just undergrad d: it 's very convenient especially if dan happens to be sitting at that end of the table to not have to run down here and look in the thing every so often , phd b: and given that we ' ve got a wireless that we ' ve got a we got the field . undergrad d: but just have the it 's right there . the antenna 's right there , right outside the y , we need to clear this with aciri but , how tough can that be ? there it you 'd all you need 's web access , is n't it ? grad a: right , so it 's just a question of getting a laptop and a wireless modem . undergrad d: no , and he had , reque @ my proposal is you have a laptop . undergrad d: you do n't ? if if we bought you the thing would you mind using it with i the phd b: i would love to but i ' m not if my laptop is compatible with the wave lan thing they 're using . phd b: what you it just plug plugs in a pc card , so you could probably make it run with that , but . phd b: , i ' m . i imagine there is . but anyway there are abs there are a bunch of machines at icsi that have those cards phd b: and so if w if it does n't we should be able to find a machine that does that . i know that does n't do n't the important people have those little blue vaios that undergrad d: , b that to me that 's a whole nother . that 's a whole nother issue . the the idea of con convincing them that we should use their network i is fairly straight forward . the idea of being able to walk into their office and say , " , can i borrow your machine for a while " , is a non - starter . that i do n't think that 's gon na work . so , either we figure out how to use a machine somebody already in the group already owns , a and the idea is that if it 's it perk , it 's an advantage not a disadvan or else we literally buy a machine e exactly for that purpose . certainly it solves a lot of the problems with leaving a monitor out here all the time . i ' m not a big fan of doing things to the room that make the room less attractive for other people , which is part of the reason for getting all this out of the way and , so a monitor sitting here all the time people are gon na walk up to it and go , " how come i ca n't get , pong on this " or , whatev grad a: i ' ve borrowed the iram vaio sony thingy , and i do n't think they 're ever gon na want it back . undergrad d: , the certainly , u you should give it a shot first see whether you can get compatible . 
, ask them what it costs . ask them if they have an extra one . who knows , they might have an extra hardware s undergrad d: , the , tsk . it 's gon na be hooked up to all sorts of junk . there 's gon na be actually a plug at the front that 'll connect to people 's laptops so you can walk in and plug it in . and it 's gon na be con connected to the machine at the back . so we certainly could use that as a constant reminder of what the vu meters are doing . undergrad d: so people sitting here are going " testing , one , two , three " ! undergrad d: but but the idea of having a control panel it 's that 's there in front of you is really . undergrad d: as long as you d as l as long as you 're not tempted to sit there and f keep fiddling with the volume controls going , " can you talk a bit louder ? " grad a: i had actually earlier asked if i could borrow one of the cards to do wireless and they said , " , whenever you want " . so it wo n't be a problem . grad a: right , and if his does n't work , as i said , we can use the pc . undergrad d: i it 'll work it 'll work the first time . i trust steve jobs . grad a: so jim is gon na be doing wiring and you 're gon na give some thought to cabinets ? phd b: this so again , washington wants to equip a system . our system , we spent ten thousand dollars on equipment not including the pc . however , seven and a half thousand of that was the wireless mikes . using these undergrad d: and it and the f the five thousand for the wires , so if i ' m gon na do no . it 's a joke . phd b: but we have n't spent that , right ? but once we ' ve done the intellectual part of these , we can just knock them out , right ? phd b: and then we could washington could have a system that did n't have any wireless but would had what 's based on these and it would cost phd b: pc and two thousand dollars for the a - to - d . and that 's about cuz you would n't even need the mixer if you did n't have the th the p z ms cost a lot . but anyway you 'd save , on the seven or eight thousand for the wireless system . so actually that might be attractive . ok , move my thumb now . postdoc e: that 's a great idea . it 's it 's to be thinking toward that . grad a: , actually shorten there 's a speech compression program that works great on things like this , cuz if the dynamic range is low it encodes it with fewer bits . and so most of the time no one 's talking so it shortens it dramatically . but if you talk quieter , the dynamic range is lower and it will compress better . grad a: how do you spell that ? can you do one more round of digits ? are we done talking ? undergrad d: but you there 's a problem a structural problem with this though . you really need an incentive at the end if you 're gon na do digits again . like , candy bars , undergrad d: or both . eric , you and i win . we did n't make any mistakes . grad a: it 's it 's only a hard time for the transcriber not for the speech recognizer . undergrad d: very good . so eric , you win . but the other thing is that there 's a colon for transcripts . and there should n't be a colon . because see , everything else is you fill in . phd b: they start , six , seven , eight , nine , zero , one , two , three , four , five , six , eight , nine . phd b: and they 're in order because they 're sorted lexically by the file names , which are have the numbers in digits . and so they 're actually this is like all the all utterances that were generated by speaker mpj . and then within mpj they 're sorted by what he actually said . 
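A back-of-envelope illustration of grad a's point about shorten-style lossless compression: a coder whose per-sample cost grows with the prediction-residual magnitude spends far fewer bits on quiet material and silence. The toy cost function below is only a stand-in, not shorten's actual coder.

```python
import numpy as np

def rough_bits(x):
    # toy cost: bits per sample grow with the first-difference residual,
    # so low-amplitude (quiet) signal and silence are cheap to encode
    resid = np.diff(x.astype(np.int32))
    return int((1 + np.ceil(np.log2(np.abs(resid) + 1))).sum())

t = np.linspace(0, 100, 16000)
loud = (8000 * np.sin(t)).astype(np.int16)
quiet = (80 * np.sin(t)).astype(np.int16)  # same signal, 40 dB quieter
print(rough_bits(loud), ">", rough_bits(quiet))
```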
phd b: it does n't matter ! it 's like cuz you said " six , seven , eight " . undergrad d: but the real question i have is that , why bother with these ? why do n't you just ask people to repeat numbers they already know ? like phone numbers , social security numbers . undergrad d: , so you just say your credit card numbers , say your phone numbers , say your mother 's maiden name . grad a: actually , this i got this directly from another training set , from aurora . we can compare directly . phd b: there were no there were no direct driver errors , by the look of it , which is good .
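On phd b's point about why the digit forms come back grouped by speaker and in a fixed order: a plain lexical sort of filenames that embed the speaker id and the digit string does both groupings at once. The naming pattern here is invented for illustration.

```python
files = ["mpj_345.wav", "fjd_120.wav", "mpj_012.wav", "fjd_999.wav"]
for name in sorted(files):  # lexical sort: speaker id first, then digit string
    print(name)
# fjd_120.wav  fjd_999.wav  mpj_012.wav  mpj_345.wav
```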
The Berkeley Meeting Recorder group discussed the aims, methods, timing, and outsourcing issues concerning transcription of the Meeting Recorder corpus. The Transcriber software tool was introduced, along with a set of transcription conventions for coding different speech events. The prospect of sending the data to an external transcription service was weighed against that of hiring a graduate-student transcriber pool; it was tentatively decided that the latter option would be less costly and would allow the group to maintain greater control over the transcription process. Methods for distributing the data were briefly discussed, along with an initiative for creating a project website, and the group received an update on the meeting-room recording setup and electronics.

The group will obtain a copy of the Corpus of Spoken American English (CSAE) from the LDC to compare the methods and conventions used by UC Santa Barbara with those being considered for the Meeting Recorder project. Speakers fe008 and me011 will experiment with pre-segmentation procedures in hopes of facilitating the transcription process. Speaker fe008 will transcribe one meeting (40-60 minutes of data) as a pilot project, and a tentative decision was made to put her in charge of the in-house transcription effort. Modifications to the Transcriber source code, e.g. adjusting the delay between the audio play function and the inputting of time boundaries, may be undertaken by the group. An internal project website was discussed, along with ideas for providing web access to external organizations such as the University of Washington. A cabinet will be built to house wires and other electronic equipment used in the recording setup, and a laptop and wireless modem will be available to participants for monitoring recording progress.

Open issues include how transcripts and corresponding audio files should be formatted and distributed; which levels of transcription should be encoded; what size of segment is most useful for transcription; and which level of segmentation best suits the aims of the project. External transcription services are expensive, difficult to monitor, and unlikely to be able to handle multi-channel data. Transcriber's interface only allows the user to view two overlapping speakers, and decisions must still be made regarding the security of the recording equipment.

Efforts are in progress to collect and transcribe 30-40 hours of Meeting Recorder data. The Transcriber tool is being used for word-level transcription and was reported to work well, when used with supplementary scripts, for specifying multiple speakers. Other levels of transcription being considered include marking stress, repairs, and false starts, and a set of conventions has been formulated for marking colloquial forms, the continuity of utterances, and the like. A cost assessment was made for sending the data to an external transcription service; hiring linguistics graduate students was judged cheaper and easier to oversee. Tentative pre-segmentation efforts should enable the automatic generation of phrase boundaries and speaker-identity coding, and should be extendable for use at UW. Perl and Xwaves scripts are available for extracting and aligning the digits data. Other suggestions for future work included multi-channel speech/non-speech detection and linking an ASR system to the Transcriber tool. The recording-room electronics setup has been diagrammed and will take approximately one week to configure; such efforts should enable researchers at UW and other collaborating institutions to build their own recording setups more cheaply.
###dialogue: grad a: ok , this is one channel . can you , say your name and talk into your mike one at a time ? undergrad d: david , can we borrow your labelling machine to improve the quality of the labelling a little bit here ? grad a: cuz we ' ve already have like , forms filled out with the numbers on them . so , let 's keep the same numbers on them . undergrad d: want to join the meeting , dave ? do we do we have a spare , grad a: and i ' m getting lots of responses on different ones , so i assume the various and assorted p z ms are on . postdoc e: this is abou we 're mainly being taped but we 're gon na talk about , transcription for the m future meeting meetings . undergrad d: so , i do n't understand if it 's neck mounted you do n't get very good performance . postdoc e: why did n't i you were saying that but i could hear you really on the transcription on the , tape . grad a: but they were complaining about it . because it 's not it does n't go over the ears . phd b: why ? it 's not s it 's not supposed to cover up your ears . postdoc e: so that 's what you 're d he 's got it on his temples so it cuts off his circulation . grad a: , i ' m just that digit - y g sorta guy . so this is adam . postdoc e: now , just to be , the numbers on the back , this is the channel ? postdoc e: good . , this is jane , on mike number five . start ? do i need to say anything more ? grad a: should i turn off the vu meter dan ? do you think that makes any difference ? grad a: the vu meter which tells you what the levels on the various mikes are and there was one hypothesis that perhaps that the act of recording the vu meter was one of the things that contributed to the errors . undergrad d: , but eric , you did n't think that was a reasonable hypothesis , right ? grad a: , the only reason that could be is if the driver has a bug . right ? because the machine just is n't very heavily loaded . phd b: no , we do n't . but we ought to st we ought to standardize . phd b: , i s i spoke to somebody , morgan , about that . i think we should put mar , no , w we can do that . phd b: but i think we should put them in standard positions . we should make little marks on the table top . grad a: which means we need to move this thing , and sorta decide how we 're actually going to do things . phd b: i that 's the point . it 'll be a lot easier if we have a if we have them permanently in place like that . undergrad d: , lila actually is almost getting r pretty close to even getting ready to put out the purchase order . i handed it off to her about a month ago . grad a: topic of this meeting is i wanna talk a little bit about transcription . , i ' ve looked a little bit into commercial transcription services and jane has been working on doing transcription . , and so we wan wanna decide what we 're gon na do with that and then get an update on the electronics , and then , maybe also talk a little bit about some infrastructure and tools , and so on . , eventually we 're probably gon na wanna distribute this thing and we should decide how we 're gon na handle some of these factors . grad a: , so we 're collecting a corpus and it 's gon na be generally useful . , it seems like it 's not a corpus which is , has been done before . and so people will be interested in having it , and so we will grad a: and and so how we do we distribute the transcripts , how do we distribute the audio files , how do we just do all that infrastructure ? 
phd c: , for that particular issue ther there are known sources where people go to find these things like the ldc . phd b: right . the it 's not so much the actu the logistics of distribution are secondary to preparing the data in a suitable form for distribution . grad a: so , as it is , it 's a ad - hoc combination of dan set and i set up , which we may wanna make a little more formal . phd b: and the other thing is that , university of washington may want to start recording meetings as , in which case w we 'll have to decide what we ' ve actually got so that we can give them a copy . grad a: i was actually thinking i would n't mind spending the summer up there . that would be fun . grad a: , and then also i have a bunch of for doing this digits . so i have a bunch of scripts with x waves , and some perl scripts , and other things that make it really easy to extract out and align where the digits are . and if u d uw 's going to do the same thing it 's worth while for them to do these digits tasks as . grad a: and what i ' ve done is pretty ad - hoc , so we might wanna change it over to something a little more standard . , stm files , or xml , . grad a: they certainly wanna collect more data . and so they 're applying , i b is that right ? something like that . , for some more money to do more data . so we were planning to do like thirty or forty hours worth of meetings . they wanna do an additional hundred or so hours . so , they want a very large data set . , but we 're not gon na do that if we do n't get money . and i would like that just to get a disjoint speaker set and a disjoint room . , one of the things morgan and i were talking about is we 're gon na get to know this room really , phd b: , now you ' ve touched the fan control , now all our data 's gon na be undergrad d: you think that things after the f then this fan 's wired backwards . , this is high speed here . undergrad d: so , do our meetings in the dark with no air conditioning in the future . phd c: it would , it would real really mean that we should do short meetings when you turn off the air conditioning , undergrad d: actually , the a th air the air conditioning 's still working , that 's just an auxiliary fan . phd c: so , in addition to this issue about the uw there was announced today , via the ldc , a corpus from i believe santa barbara . phd c: , of general spoken english . and i exactly how they recorded it but there 's a lot of different styles of speech and what not . postdoc e: i assume so , actually , i had n't thought about that . unless they added close field later on but , i ' ve listened to some of those data and i , i ' ve been i was actually on the advisory board for when they set the project up . grad a: because it would to be able to take that and adapt it to a meeting setting . phd c: , what i was thinking is it may be useful in transcribing , if it 's far field , right ? in doing , some of our first automatic speech recognition models , it may be useful to have that data postdoc e: and and their recording conditions are really clean . , i ' ve heard i ' ve listened to the data . undergrad d: not head mounted ? and so that 's why they 're getting away with just two channels , or are they using multiple dats ? phd c: no , and their web page did n't answer it either . so i ' m , i , was thinking that we should contact them . so it 's that 's a beside - the - point . but . 
phd c: so , i can actually arrange for it to arrive in short order if we 're grad a: , it 's silly to do unless we 're gon na have someone to work on it , so maybe we need to think about it a little bit . postdoc e: the other thing too is that their jus their transcription format is really and simple in the discourse domain . but they also mentioned that they have it time aligned . , i s i saw that write - up . phd c: maybe we should maybe we should get a copy of it just to see what they did phd c: alright , i 'll do that . i ca n't remember the name of the corpus . it 's corps - s postdoc e: , sp i ' ve been i was really pleased to see that . i knew that they had some funding problems in completing it phd c: and the there 's still more that they 're gon na do like that unless they have funding issues phd c: and then it ma they may not do phase two but from all the web documentation it looked like , " , this is phase one " , whatever that means . postdoc e: super . , that , they 're really respected in the linguistics d side too and the discourse area , and so this is a very good corpus . phd c: but , it would also maybe help be helpful for liz , if she wanted to start working on some discourse issues , looking at some of this data and then , so when she gets here maybe that might be a good thing for her . grad a: actually , that 's another thing i was thinking about is that maybe jane should talk to liz , to see if there are any transcription issues related to discourse that she needs to get marked . phd c: but maybe we should , find some day that liz , liz and andreas seem to be around more often . so maybe we should find a day when they 're gon na be here and morgan 's gon na be here , and we can meet , at least this subgroup . , not necessarily have the u - dub people down . grad a: , i was even thinking that maybe we need to at least ping the u - dub to see grad a: , say " this is what we 're thinking about for our transcription " , if nothing else . so , w shall we move on and talk a little bit about transcription then ? grad a: so since that 's what we 're talking about . what we 're using right now is a tool , from this french group , called " transcriber " that seems to work very . so it has a , useful tcl - tk user interface and , undergrad d: thi - this is the process of converting audio to text ? and this requires humans just like the stp . grad a: yes , right , right . so we 're at this point only looking for word level . so all so what you have to do is just identify a segment of speech in time , and then write down what was said within it , and identify the speaker . and so the things we that we know that i know i want are the text , the start and end , and the speaker . but other people are interested in stress marking . and so jane is doing primary stress , stress marks as . , and then things like repairs , and false starts , and , filled pauses , and all that other , we have to decide how much of that we wanna do . postdoc e: i did include a glo , a certain first pass . my my view on it was when you have a repair then , it seems , we saw , there was this presentation in the one of the speech group meetings about how and liz has done some too on that , that it , that you get it bracketed in terms of like , if it 's parenthetical , which i know that liz has worked on , then y you 'll have different prosodic aspects . 
and then also if it 's a r if it 's a repair where they 're like what did , then it 's to have a sense of the continuity of the utterance , the start to be to the finish . and , it 's a little bit deceptive if you include the repai the pre - repair part and sometimes or of it 's in the middle . anyway , so what i was doing was bracketing them to indicate that they were repairs which is n't , very time - consuming . undergrad d: i is there already some plan in place for how this gon na be staffed or done ? or is it real is that what we 're talking about here ? grad a: that 's part of the thing we 're talking about . so what we wanted to do was have jane do one meeting 's worth , forty minutes to an hour , and grad a: ten times about , is and so one of the things was to get an estimate of how long it would take , and then also what tools we would use . and so the next decision which has to be made actually pretty soon is how are we gon na do it ? undergrad d: and so you make jane do the first one so then she can decide , we do n't need all this , just the words are fine . postdoc e: i wanna hear about these , we have a g you were s continuing with the transcription conventions for s grad a: r right , so one option is to get linguistics grad students and undergrads to do it . and that 's happened in the past . and that 's probably the right way to do it . , it will require a post pass , people will have to look at it more than once to make that it 's been done correctly , but ca n't imagine that we 're gon na get anything that much better from a commercial one . and the commercial ones i ' m will be much more expensive . grad a: but , that 's what we 're talking about is getting some slaves who need money grad a: i meant joy . and so again , i have to say " are we recording " and then say , morgan has consistently resisted telling me how much money we have . postdoc e: th there is als , really . there is also the o other possibility which is if you can provide not money but instructional experience or some other perks , you can you could get people to , to do it in exchange . undergrad d: , i b but , morgan 's in a bind over this and thing to do is just the field of dreams theory , which is we go ahead as though there will be money at the time that we need the money . and that 's the best we can do . i b to not do anything until we get money is ridiculous . we 're not gon na do any get anything done if we do that . grad a: so at any rate , jane was looking into the possibility of getting students , at is that right ? talking to people about that ? postdoc e: i ' m afraid i have n't made any progress in that front yet . i should ' ve sent email and i have n't yet . undergrad d: i d do so until you actually have a little experience with what this french thing does we do n't even have undergrad d: i ' m . so that 's where you came up with the f the ten x number ? or is that really just a ? postdoc e: i have n't done a s see , i ' ve been at the same time doing a boot strapping in deciding on the transcription conventions that are , and like , how much postdoc e: there 's some interesting human factors problems like , what span of time is it useful to segment the thing into in order to , transcribe it the most quickly . cuz then , you get like if you get a span of five words , that 's easy . but then you have to take the time to mark it . 
and then there 's the issue of it 's easier to hear it th right the first time if you ' ve marked it at a boundary instead of somewhere in the middle , cuz then the word 's bisected or whatever and so , i ' ve been playing with , different ways of mar cuz i ' m thinking , , if you could get optimal instructions you could cut back on the number of hours it would take . undergrad d: d does this tool you 're using is strictly it does n't do any speech recognition does it ? postdoc e: no , it does n't but what a super tool . it 's a great environment . undergrad d: but but is there anyway to wire a speech recognizer up to it and actually run it through undergrad d: , a couple things . first of all the time marking you 'd get you could get by a tool . undergrad d: , i ' m think about the close caption that you see running by on live news casts . undergrad d: no , i understand . and in a lot of them you see typos and things like that , but it occurs to me that it may be a lot easier to correct things than it is to do things from scratch , no matter how wonderful the tool is . phd c: , but sometimes it 's easier to type out something instead of going through and figuring out which is the right undergrad d: s but again the timing is for fr should be for free . the timing should be grad a: we do n't care about the timing of the words , just of the utterances . phd b: we have n't decided which time we care about , and that 's one of the things that you 're saying , is like you have the option to put in more or less timing data and , be in the absence of more specific instructions , we 're trying to figure out what the most convenient thing to do is . grad a: so what she 's done so far , is more or less breath g not breath groups , phrases , continuous phrases . and so , that 's because you separate when you do an extract , you get a little silence on either end . so that seems to work really . postdoc e: that 's ideal . although i was i , the alternative , which i was experimenting with before i ran out of time , recently was , that , ev if it were like an arbitrary segment of time i t pre - marked cuz it does take time to put those markings in . it 's really the i the interface is wonderful because , the time it takes is you listen to it , and then you press the return key . but then , it 's like , you press the tab key to stop the flow and , the return key to p to put in a marking of the boundary . but , there 's a lag between when you hear it and when you can press the return key so it 's slightly delayed , so then you listen to it a second time and move it over to here . undergrad d: ar but are are those d delays adjustable ? those delays adjustable ? see a lot of people who actually build with human computer interfaces understand that delay , and so when you by the time you click it 'll be right on because it 'll go back in time to put the phd b: but we ' ve got the most channel data . we 'd have to do it from your signal . , we ' ve got a lot of data . grad a: , i the question is how much time will it really save us versus the time to write all the tools to do it . phd b: but the chances are if we 're talking about collecting ten or a hundred hours , which is going to take a hundred or a thousand hours to transcribe grad a: we 're gon na need we 're gon na need ten to a hundred hours to train the tools , and validate the tools the do the d to do all this anyway . postdoc e: i ' m . i wish you had told me wish you 'd told me . phd b: , i it seems like , i . 
, it 's maybe like a week 's work to get to do something like this . so forty or fifty hours . postdoc e: could you get it so that with so it would detect volume on a channel and insert a marker ? and the format 's really transparent . it 's just a matter of a very c clear it 's xml , is n't it ? it 's very , i looked at the file format and it 's just it has a t a time indication and then something or other , and then an end time or other . phd c: so maybe we could try the following experiment . take the data that you ' ve already transcribed and postdoc e: s i ' m not if it 's that 's much but anyway , enough to work with . phd c: , and throw out the words , but keep the time markings . and then go through , and go through and try and re - transcribe it , given that we had perfect boundary detection . postdoc e: , that 's what i was thinking . i 'd be cheating a little bit g with familiarity effect . phd c: , that 's part of the problem is , is that what we really need is somebody else to come along . phd b: , no , you should do it do it again from scratch and then do it again at the boundaries . so you do the whole thing three times and then we get undergrad d: and then w since we need some statistics do it three more . and so you 'll get down to one point two x by the time you get done . postdoc e: i 'll do that tomorrow . i should have it finished by the end of the day . undergrad d: no , but the fact that she 's did it before just might give a lower bound . that 's all . , which is fine . undergrad d: it 's and if the lower bound is nine x then w it 's a waste of time . postdoc e: , but there 's an extra problem which is that i did n't really keep accurate postdoc e: , it was n't a pure task the first time , so , it 's gon na be an upper bound in that case . and it 's not really strictly comparable . so though it 's a good proposal to be used on a new batch of text that i have n't yet done yet in the same meeting . could use it on the next segment of the text . phd b: the point we where do we get the oracle boundaries from ? or the boundaries . grad a: , one person would have to assign the boundaries and the other person would have to postdoc e: , but the oracle boundaries would come from volume on a partic specific channel would n't they ? phd c: no , no . you wanna know given given a perfect human segmentation , you wanna know how , the question is , is it worth giving you the segmentation ? grad a: , that 's easy enough . i could generate the segmentation and you could do the words , and time yourself on it . grad a: that would at least tell us whether it 's worth spending a week or two trying to get a tool , that will compute the segmentations . undergrad d: and the thing to keep in mind too about this tool , guys is that , you can do the computation for what we 're gon na do in the future but if uw 's talking about doing two , or three , or five times as much and they can use the same tool , then there 's a real multiplier there . postdoc e: and the other thing too is with speaker identification , if that could handle speaker identification that 's a big deal . postdoc e: , that 's a feature . that 's a major that 's like , one of the two things that phd c: , there 's gon na be in the meeting , like the reading group meeting that we had the other day , that 's it 's gon na be a bit of a problem because , like , i was n't wearing a microphone f and there were other people that were n't wearing microphones . 
phd b: so i need to we need to look at what the final output is but it seems like we it does n't it seems like it 's not really not that hard to have an automatic tool to generate the phrase marks , and the speaker , and speaker identity without putting in the words . grad a: so it would be easy . if you 'd tell me where it is , ? postdoc e: we did n't finish the part of work already completed on this , did we ? , you talked a little bit about the transcription conventions , and , i you ' ve mentioned in your progress report , or status report , that you had written a script to convert it into so , i when i i the it 's quickest for me in terms of the transcription part to say something like , if adam spoke to , to just say , " a colon " , like who could be , at the beginning of the line . and e colon instead of entering the interface for speaker identification and clicking on the thing , indicating the speaker id . so , and then he has a script that will convert it into the thing that , would indicate speaker id . grad a: so so the at ten x seems to be pretty standard . everyone more or less everyone you talk to says about ten times for hard technical transcription . grad a: which is a service that you send an audio file , they do a first - pass speech recognition . and then they do a clean up . but it 's gon na be horrible . they 're never gon na be able to do a meeting like this . grad a: , for cyber transcriber they do n't quote a price . they want you to call and talk . so for other services , they were about thirty dollars an hour . undergrad d: d did you talk to anybody that does closed captioning for , tv ? cuz they a usually at the end of the show they 'll tell what the name of the company is , the captioning company that 's doing it . grad a: it was just a net search . and , so it was only people who have web pages and are doing through that . undergrad d: , the thing about this is thinking , maybe a little more globally than i should here but that really this could be a big contribution we could make . , we ' ve been through the stp thing , we it what it 's like to manage the process , and admittedly they might have been looking for more detail than what we 're looking for here but it was a big hassle , right ? , , they constantly could ' ve reminding people and going over it . and clearly some new needs to be done here . and it 's only our time , where " our " includes dan , dan and you guys . it does n't include me . j just seems like phd b: , i if we 'd be able to do any thing f to help stp type problems . but certainly for this problem we can do a lot better than phd b: because they had because they only had two speakers , right ? , the segmentation problem is grad a: , mostly because they were doing much lower level time . so they were doing phone and syllable transcription , as , word transcription . and so we 're w we decided early on that we were not gon na do that . undergrad d: i see . but there 's still the same issue of managing the process , of reviewing and keeping the files straight , and all this , that which is clearly a hassle . grad a: and so what i ' m saying is that if we hire an external service we can expect three hundred dollars an hour . that 's the ball park . there were several different companies that and the range was very tight for technical documents . twenty - eight to thirty - two dollars an hour . phd c: and who knows if they 're gon na be able to m manage multal multiple channel data ? 
grad a: sev - several of them say that they 'll do meetings , and conferences , and s and so on . none of them specifically said that they would do speaker id , or speaker change mark . they all just said transcription . undergrad d: th - th the th there may be just multiplier for five people costs twice as much and for ten people co something like that . grad a: , the way it worked is it was scaled . so what they had is , if it 's an easy task it costs twenty - four dollars an hour and it will take maybe five or six times real time . and what they said is for the hardest tasks , bad acoustics , meeting settings , it 's thirty - two dollars an hour and it takes about ten times real time . so that we can count on that being about what they would do . it would probably be a little more because we 're gon na want them to do speaker marking . undergrad d: a lot of companies i ' ve worked for y the , the person leading the meeting , the executive or whatever , would go around the room and mentally calculate h how many dollars per hour this meeting was costing , in university atmosphere you get a little different thing . but , it 's a lot like , " he 's worth fifty an hour , he 's worth " and so he so here we 're thinking , " let 's see , if the meeting goes another hour it 's going to be another thousand dollars . " ? grad a: but at any rate , so we have a ballpark on how much it would cost if we send it out . undergrad d: , i ' m , three hundred . right , i w got an extra factor of three there . phd c: so it 's thirty dollars an hour , essentially , right ? but we can pay a graduate student seven dollars an hour . and the question is what 's the difference phd c: or ei eight dollars . what do what the going rate is ? it 's it 's on the order of eight to ten . phd c: , i what the standard but there is a standard pay scale what it is . phd c: so that means that even if it takes them thirty times real time it 's cheaper to do graduate students . grad a: , that 's why i said originally , that i could n't imagine sending it out 's gon na be cheaper . postdoc e: the other thing too is that , if they were linguistics they 'd be , in terms of like the post editing , i tu content wise they might be easier to handle cuz they might get it more right the first time . grad a: and also we would have control of , we could give them feedback . whereas if we do a service it 's gon na be limited amount . grad a: , we ca n't tell them , " for this meeting we really wanna mark stress grad a: and for this meeting we want " and and they 're not gon na provide stress , they 're not gon na re provide repairs , they 're not gon na provide they may or may not provide speaker id . so that we would have to do our own tools to do that . undergrad d: just hypoth hypothetically assuming that we go ahead and ended up using graduate students . i who 's the person in charge ? who 's gon na be the steve here ? postdoc e: , interesting . , now would this involve some manner of , monetary compensation or would i be the voluntary , coordinator of multiple transcribers for checking ? grad a: , i would imagine there would be some monetary involved but we 'd have to talk to morgan about it . undergrad d: , it just means you have to stop working for dave . see ? that 's why dave should have been here . grad a: , i would like you to do it because you have a lot more experience than i do , but if that 's not feasible , i will do it with you as an advisor . undergrad d: w we 'd like you to do it and we 'd like to pay you . 
postdoc e: boy , if i wanted to increase my income i could start doing the transcribing again . undergrad d: an and be and say , would you like fries with that when you 're thinking about your pay scale . postdoc e: , no , that i would be interested in that in becoming involved in the project in some aspect like that phd b: what s so what are you so you ' ve done some portion of the first meeting . and what 's your plan ? postdoc e: what , what was right now we have p so i gave him the proposal for the transcription conventions . he made his , suggestion of improvement . the the it 's a good suggestion . so as far as i ' m concerned those transcription conventions are fixed right now . and so my next plan would be postdoc e: they 're very minimal . so , it would be good to just to summarize that . so , one of them is the idea of how to indicate speaker change , and this is a way which meshes with , making it so that , , on the at the boy , it 's such a interface . when you when you get the , you get the speech signal you also get down beneath it , an indication of , if you have two speakers overlapping in a s in a single segment , you see them one displayed one above each other . and then at the same time the top s part of the screen is the actual verbatim thing . you can clip click on individual utterances and it 'll take you immediately to that part of the speech signal , and play it for you . and you can , you can work pretty between those two these two things . grad a: , the user interface only allows two . and so if you 're using their interface to specify overlapping speakers you can only do two . but my script can handle any . and their save format can handle any . and so , using this the convention that jane and i have discussed , you can have as many overlapping speakers as you want . undergrad d: do y is this a , university project ? th - this is the french software , right ? grad a: multichannels was also , they said they wanted to do it but that the code is really very organized around single channels . so that 's n unlikely to ha happen . undergrad d: do - do what they 're using it for ? why 'd they develop it ? phd c: they 're they have some connection to the ldc cuz the ldc has been advising them on this process , the linguistic data consortium . so but a apart from that . grad a: it 's also all the source is available . if you if you speak tcltk . grad a: and they have they ' ve actually asked if we are willing to do any development and i said , maybe . so if we want if we did something like programmed in a delay , which actually is a great idea , i ' m they would want that incorporated back in . postdoc e: pre - lay . , and they ' ve thought about things . , they do have so you have when you play it back , it 's it is useful to have , a break mark to se segment it . but it would n't be strictly necessary cuz you can use the , the tabbed key to toggle the sound on and off . , it 'll stop the s speech if you press a tab . and , . and so , that 's a feature . and then also once you ' ve put a break in then you have the option of cycling through the unit . you could do it like multiply until you get crazy and decide to stop cycling through that unit . undergrad d: loop it ? yo - you n , there 's al also the user interface that 's missing . undergrad d: it 's missing from all of our offices , and that is some analog input for something like this . it 's what audio people actually use . 
it 's something that wh when you move your hand further , the sound goes faster past it , like fast forward . , like a joy stick or a , you could wire a mouse or trackball to do something like that . undergrad d: no , but i ' m saying if this is what professionals who actually do this thing for m for video or for audio where you need to do this , and so you get very good at jostling back and forth , rather than hitting tab , and backspace , and carriage return , and enter , and things like that . grad a: , we talked about things like foot pedals and other analog so , tho those are things we could do how much it 's worth doing . we 're just gon na have postdoc e: i agree . they they have several options . so , , i mentioned the looping option . another option is it 'll pause when it reaches the end of the boundary . and then to get to the next boundary you just press tab and it goes on to the next unit . , it 's very nicely thought out . they thought about and also it 'll go around the c the , i wanna say cursor but i ' m not if that 's the right thing . postdoc e: anyway , you can so they thought about different ways of having windows that you c work within , and but so in terms of the con the conventions , then , , it 's strictly orthographic which means with some w provisions for , w , colloquial forms . so if a person said , " cuz " instead of " because " then i put a an apostrophe at the beginning of the word and then in double ang angle brackets what the full lexical item would be . and this could be something that was handled by a table but to have a convention marking it as a non - standard or wha i do n't mean standard but a non , ortho orthographic , whatever . postdoc e: gon na or " wanna " , the same thing . and and there would be limits to how much refinement you want in indicating something as non - standard pres pronunciation . postdoc e: , yes , there was some in my view , when i when you ' ve got it densely overlapping , i did n't worry about s specific start times . postdoc e: that this is not gon na be easily processed anyway and maybe i should n't spend too much time getting exactly when the person said " no " , or , , i " immediate " . and instead just rendered " within this time slot , there were two people speaking during part of it and if you want more detail , figure it out for yourself " , grad a: , what w what eric was talking about was channels other than the direct speech , phd c: , what is wh , when somebody says " - " in the middle of , a @ phd c: , cuz i was listening to dan was agreeing a lot to things that you were saying as you were talking . postdoc e: appreciate it . , if it if there was a word like " right " , then i wou i would indicate that it happened within the same tem time frame phd b: i transcribed a minute of this and there was a lot of overlapping . it was grad a: , when no one i when we 're not actually in the meeting , and we 're all separated , and doing things . but even during the meeting there 's a lot of overlap but it 's marked pretty clearly . , some of the backchannel jane had some comments and but a lot of them were because you were at the meeting . and so that often you ca n't tell . postdoc e: only when it was otherwise gon na be puzzling because he was in the other room talking . grad a: , but someone who , was just the transcriber would n't have known that . or when dan said , " i wa i was n't talking to you " . 
postdoc e: he was so he was checking the meter levels and we were handling things while he was labeling the whatever it was , the pda ? and and so he was in you were talking so i was saying , like " and i could label this one left . right ? " and he and he said , " i do n't see anything " . and he said , " i was n't talking to you " . or it was n't it did n't sound quite that rude . but really , no , w in the context if he ca n't hear what he 's saying grad a: what it what happens is if you 're a transcriber listening to it sounds like dan is just being a total impolite . postdoc e: , you 'll see . you can listen to it . , it was you who was . no , but you were asking off the wall questions . grad a: but but if you knew that i was n't actually in the room , and that dan was n't talking to me , it became ok . postdoc e: and that 's w that 's where i added comments . the rest of the time i did n't bother with who was talking to who but this was unusual circum circumstance . postdoc e: and part of it was funny , reason was because it was a mixed signal so you could n't get any clues from volume that , he was really far away from this conversation . grad a: i should rewrite the mix tool to put half the people in one channel and half in the other . i have a auto - gain - mixer tool that mixes all the head mounted microphones into one signal postdoc e: but it would be , i did n't wanna add more contextual comments than were needed but that , it seemed to me , clarified that the con what was going on . and , phd c: i was just gon na ask , so wanted to c finish off the question i had about backchannels , if that 's ok , which was , so say somebody 's talking for a while and somebody goes " - " in the middle of it , and what not , does the conversation come out from the or the person who 's speaking for the long time as one segment and then there 's this little tiny segment of this other speaker or does it does the fact that there 's a backchannel split the it in two . postdoc e: ok , my focus was to try and maintain conten con content continuity and , to keep it within what he was saying . like i would n't say breath groups but prosodic or intonational groups as much as possible . so if someone said " - " in the middle of a of someone 's , intonational contour , i indicated it as , like what you just did . then i indicated it as a segment which contained @ this utterance plus an overlap . phd b: but that 's but there 's only one time boundary for both speakers , right ? undergrad d: whenever we use these speech words we should always do the thing like you 're talking about , accent , postdoc e: and then " hesitation " . and so then , in terms of like words like " and " wrote them because i figured there 's a limited number , and i keep them to a , limited set because it did n't matter if it was " mmm " or " , versus " . so always wrote it as u m . and " - " , " uhuh . " , like a s set of like five . but in any case i did n't mark those . postdoc e: i 'd be happy with that . that 'd be fine . it 'd be good to have that in the conventions , what 's to be used . grad a: i did notice that there were some segments that had pauses on the beginning and end . we should probably mark areas that have no speakers as no speaker . then , so question mark colon is fine for that . grad a: , i wanna leave the marked i do n't want them to be part of another utterance . so you just you need to have the boundary at the start and the end . 
postdoc e: now that 's refinement that , maybe it could be handled by part of the script more phd b: it seems like the , tran the transcription problem would be very different if we had these automatic speaker detection turn placing things . because suddenly , i , actually it sounds like there might be a problem putting it into the software if the software only handles two parallel channels . but assuming we can get around that somehow . grad a: it can read and write as many as you want , it 's just that it phd b: but what if you wanna edit it ? right ? , we 're gon na generate this transcript with five tracks in it , but with no words . someone 's gon na have to go in and type in the words . and if there are five people speaking at once , grad a: right , i it 's i did n't explain it . if we use the little the conventions that jane has established , i have a script that will convert from that convention to their saved convention . postdoc e: which allows five . and it can be m edited after the fact , ca n't it also ? but their but their format , if you wanted to in indicate the speakers right there instead of doing it through this indirect route , then i they a c window comes up and it only allows you to enter two speakers . undergrad d: but you 're saying that by the time you call it back in to from their saved format it opens up a window with five speakers ? grad a: the whole saved form the saved format and the internal format , all that , handles multiple speakers . grad a: it 's just there 's no user interface for specifying multiple any more than two . undergrad d: cuz we 're always gon na wanna go through this preprocessing , assuming it works . postdoc e: and that works nicely cuz this so quick to enter . so i would n't wanna do it through the interface anyway adding which worry who the speaker was . and then , let 's see what else . i wanted to have so sometimes a pers i in terms of like the continuity of thought for transcriptions , it 's i it is n't just words coming out , it 's like there 's some purpose for an utterance . and sometimes someone will do a backchannel in the middle of it but you wanna show that it 's continued at a later point . so i have a convention of putting like a dash arrow just to indicate that this person 's utterance continues . and then when it , catches back up again then there 's an arrow dash , and then you have the opposite direction to indicate continuation of ones own utterance versus , sometimes we had the situation which is , which you get in conversations , of someone continuing someone else 's utterance , and in that case i did a tilde arrow versus a arrow tilde , to indicate that it was continuation but it was n't , i did equal arrow for the own for yourself things cuz it 's the speakers the same . and then tilde arrow if it was a different if a different speaker , con continuation . but just , the arrows showing continuation of a thought . and then you could track whether it was the same speaker or not by knowing , at the end of this unit you 'd happened later . and that was like this person continued and you 'd be able to look for the continuation . phd b: but the only time that becomes ambiguous is if you have two speakers . like , if you if you only have one person , if you only have one thought that 's continuing across a particular time boundary , you just need one arrow at each end , and if it 's picked up by a different speaker , it 's picked up by a different speaker . 
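a sketch of bookkeeping for the continuation marks described above ; the actual spellings ("=>" , "<=" , "~>" , "<~") are assumptions , since only "equal arrow" and "tilde arrow" are named in the discussion . it tracks a single open thread , which is the unambiguous case :

    # assumed spellings for the continuation marks described above
    SAME_END, SAME_START = "=>", "<="    # own utterance continues / resumes
    OTHER_END, OTHER_START = "~>", "<~"  # picked up by a different speaker

    def thread_segments(segments):
        """segments: (speaker, text) pairs in time order .  groups indices
        of segments joined by continuation marks into threads ."""
        threads, open_thread = [], None
        for i, (spk, text) in enumerate(segments):
            resumes = text.lstrip().startswith((SAME_START, OTHER_START))
            if resumes and open_thread is not None:
                open_thread.append(i)
            else:
                open_thread = [i]
                threads.append(open_thread)
            if not text.rstrip().endswith((SAME_END, OTHER_END)):
                open_thread = None
        return threads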
the time it becomes ambiguous if you have more than one speaker and that and they swap . i if you have more than one thread going , then you need to know whether they were swapped or not . grad a: especially for meetings . , if i if you were just recording someone 's day , it would be impossible . , grad a: if you were trying to do a remembrance agent . but for meetings it 's probably alright . but , a lot of these issues , that for , from my point of view , where wanna do speech recognition and information retrieval , it does n't really matter . but other people have other interests . phd b: i know . but it does feel like it 's really in there . i did this transcription and i marked that , i marked it with ellipsis because it seemed like there was a difference . it 's something you wanted to indicate that it that i this was the end of the phrase , this was the end of that particular transcript , but it was continued later . and i picked up with an ellipsis . postdoc e: that 's , i that 's why i did n't do it n , that 's why about it , and re - ev and it did n't do i did n't do it in ten times the time . grad a: , so anyway , are we interested then in writing tools to try to generate any of this automatically ? is that something you want to do , dan ? postdoc e: i also wanted to ask you if you have a time estimate on the part that you transcribed . do you have a sense of how long phd b: , it took me half an hour to transcribe a minute , but i did n't have any i did n't even have a i was trying to get transcriber to run but i could n't . so i was doing it by typ typing into a text file and trying to fit it was horrible . undergrad d: so thirty to one 's what you got ? so that 's a new upper limit ? phd c: so so if we hired a who if we hired a whole bunch of dan 's undergrad d: d does n't it beep in the other room when you 're out of disk space ? phd c: is there maybe we should s consider also , starting to build up a web site around all of these things . phd c: i want to introduce the word " snot - head " into the conversation at this point . phd c: alright , see here 's my thought behind it which is that , the that you ' ve been describing , jane , i gu one has to , indicate , i is very interesting , i 'd like to be able to pore through , the types of tr conventions that you ' ve come up with and like that . so i would like to see that on the web . postdoc e: now , w the alternative to a web site would be to put it in doctor speech . cuz cuz what i have is a soft link to my transcription that i have on my account postdoc e: web site 's . then you have to t you have to do an ht access . phd b: we could actually maybe we could use the tcl plug - in . , man . undergrad d: but he does such a good job of it . he should be allowed to , w do it . undergrad d: if you just did a crappy job , no nobody would want you to do it . phd b: i sh i should n't be allowed to by m by my own by my according to my own priorities . alright . let 's look at it anyway . so definitely we should have some access to the data . grad a: and we have quite a disparate number of web and other sorts of documents on this project spread around . i have several and dan has a few , grad a: , i ' m talking about putting together all the data in a form that is legible , and pleasant to read , and up to date , and et cetera , et cetera . undergrad d: but , is it against the law to actually use a tool to help your job go easier ? grad a: it 's it 's against the law to use a tool . i have n't found any tools that i like . 
grad a: it 's just as easy to use to edit the raw html as anything else . undergrad d: no , it 's true that he has n't found any he likes . the question is what 's he looked at . grad a: now , i if i were doing more powerful excuse me more complex web sites i might want to . postdoc e: , would this be to document it also for outside people or mainly for in house use ? undergrad d: , you 're leaving . people at uw wanna look at it . , it 's internal until grad a: send me links and i wi send me pointers , rather , and i 'll put it together . phd b: i ' m not how important that distinction is . i do n't think we should say , " , it 's internal therefore we do n't have to make it very good " . , you can say " , it 's internal therefore we can put data in it that we do n't have to worry about releasing " . but to try and be coherent and make it a presentation . undergrad d: i was looking for the actual box i plan to use , but i c all i could i could n't find it at the local store . but this is the technology . it 's actually a little bit thinner than this . and it 's two by two , by one , and it would fit right under the right under th the lip , undergrad d: there 's a lip in these tables . and , it oc i p especially brought the bottom along to try and generate some frequencies that you may not already have recorded . undergrad d: let 's see what it does to the but this was the just to review , and i also brought this along rather than the projector so we can put these on the table , and w push them around . undergrad d: that that 's the six tables that we 're looking at . these six tables here , with little boxes , in the middle here . which es would , the boxes are out of the way anyway . i 'll - i 'll show you the cro this is the table cross section . i if people realize what they 're looking at . undergrad d: why not ? , cuz this is what 's gon na happen . you got plenty of data . i wo n't come to your next meeting . and and you so this is the box 's undergrad d: or not to be . , the box , there 's a half inch lip here . the box is an inch thick so it hangs down a half an inch . and so the two head set jacks would be in the front and then the little led to indicate that box is live . the the important issue about the led is the fact that we 're talking about eight of these total , which would be sixteen channels . and , even though we have sixteen channels back at the capture , they 're not all gon na be used for this . so there 'd be a subset of them used for j just use the ones at this end for this many . so excuse me . you 'd like a way to tell whether your box is live , so the led would n't be on . undergrad d: so if you 're plugged in it does n't work and the led is off that 's a tip off . and then the , would wire the all of the cables in a bundle come through here and o collect these cables at the same time . , so this notion of putting down the p z ms and taking them away would somehow have to be turned into leaving them on the table undergrad d: or and so the you we just epoxy them down . big screw into the table . undergrad d: and even though there 's eight cables they 're not really very big around so my model is to get a p piece of undergrad d: , that people put with the little you slip the wires into that 's shaped like that cross section . undergrad d: i ' m r a i ' m going up and then i ' m going down . phd b: , because then you do n't have to just have one each . so that if t if you have two people sitting next to each other they can actually go into the same box . 
undergrad d: and to see , thi this is really the way people sit on this table . th dot , dot . undergrad d: that 's the way people sit . that 's how many chairs are in the room . undergrad d: , true enough . and actually , at the m my plan is to only bring eight wires out of this box . undergrad d: and , it 's function is to s to , essentially a wire converter to go from these little blue wires to these black wires , plus supply power to the microphones cuz the he the , cheap head mounteds all require low voltage . grad a: so so you 'd imagine some in some patch panel on top to figure out what the mapping was between each d of these two and each of those one or what ? undergrad d: i w i the simplest thing i could imagine , i which is really , really simple is to quite literally that these things plug in . and there 's a plug on the end of each of these , ei eight cables . undergrad d: an - and there 's only four slots that are , in the first version or the version we 're planning to build . so that was the whole issue with the led , that you plug it in , the led comes on , and you 're live . undergrad d: now the subtle issue here is that tha i have n't really figured out a solution for this . so , we it 'll have to be convention . what happens if somebody unplugs this because they plug in more of something else ? the there 's no clever way to let the up stream guys know that you 're really not being powered . th there will be a certain amount of looking at cables has to be done if people , rewire things . phd b: , we i had that last time . but there are actually that , there 's an extra there 's a mix out on the radio receiver ? so there are actually six xlr outs on the back of the radio receiver and only five cables going in , i had the wrong five , so i ended up not recording one of the channels and recording the mix . undergrad d: how interesting . d did you do any recognition on the mix out ? wonder whether it works any phd b: but i subtracted the four that i did have from the mix and got a pretty good approximation of the @ . undergrad d: i was wrestling with th with literally the w number of connectors in the cable and the , powering system . and i was gon na do this very clever phantom power and i decided a couple days ago not to do it . so i ' m ready to build it . which is to say , the neighborhood of a week to get the circuit board done . grad a: so the other thing i 'd like to do is , do something about the set up grad a: so that it 's a little more presentable and organized . and i ' m just not what that is . , some cabinet . undergrad d: build a cabinet . the the difficulty for this project is the intellectual capital to design the cabinet . in other words , to figure out ex exactly what the right thing is . that cabinet can go away . we can use that for kindling . but if you can imagine what the right form factor is . dan - dan and i have gone around on this , and we were thinking about something that opened up in the top to allow access to the mixer . but there 's these things sticking out of the mixer which are a pain , so you end up with this thing that if you stuck the mixer up here and the top opened , it 'd be fine . you would n't necessarily you s understand what i ' m undergrad d: the you can start s sketching it out , and certainly build it out of oak no problem , would it , arb , arbitrarily amount of grad a: i need a desk at home too , alright ? is that gon na be a better solution than just going out and buy one ? 
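the mix-out trick phd b describes above -- recovering the channel that was n't recorded by subtracting the four known channels from the mix -- looks roughly like this , assuming sample-aligned float arrays and a near-unit-gain mix bus , which is why the result is only an approximation :

    import numpy as np

    def recover_missing(mix, known_channels):
        """approximate the unrecorded channel as the mix-out minus the
        channels that were recorded .  analog gain differences and any
        limiting on the mix bus leave residual error , so this is an
        estimate , not an exact recovery ."""
        est = np.asarray(mix, dtype=float).copy()
        for c in known_channels:
            est = est - np.asarray(c, dtype=float)
        return est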
undergrad d: , the as we found out with the thing that , jeff bought a long time ago to hold our stereo system the you buy is total crap . and this is something you buy . undergrad d: it 's total crap . , it 's useless for this function . works fine for holding a kleenex , grad a: so , i g i it 's just a question , is that something you wanna spend your time on ? undergrad d: i have no problem . no , but w certainly one of the issues is the , is security . , we ' ve been lax and lucky . undergrad d: really lucky with these things . but they 're not ours , so the , the flat panels . grad a: i ' m telling you , i ' m just gon na cart one of them away if they stay there much longer . grad a: j , then the other question is do we wanna try to do a user interface that 's available out here ? grad a: a user interface . , do we wanna try to get a monitor ? or just something . undergrad d: how about use the thing that aciri 's doing . which is to say just laptop with a wireless . undergrad d: n no , i ' m serious . does does the wireless thing work on your phd b: no , no , that 's the right way to do it . t to have it , just undergrad d: it 's very convenient especially if dan happens to be sitting at that end of the table to not have to run down here and look in the thing every so often , phd b: and given that we ' ve got a wireless that we ' ve got a we got the field . undergrad d: but just have the it 's right there . the antenna 's right there , right outside the y , we need to clear this with aciri but , how tough can that be ? there it you 'd all you need 's web access , is n't it ? grad a: right , so it 's just a question of getting a laptop and a wireless modem . undergrad d: no , and he had , reque @ my proposal is you have a laptop . undergrad d: you do n't ? if if we bought you the thing would you mind using it with i the phd b: i would love to but i ' m not if my laptop is compatible with the wave lan thing they 're using . phd b: what you it just plug plugs in a pc card , so you could probably make it run with that , but . phd b: , i ' m . i imagine there is . but anyway there are abs there are a bunch of machines at icsi that have those cards phd b: and so if w if it does n't we should be able to find a machine that does that . i know that does n't do n't the important people have those little blue vaios that undergrad d: , b that to me that 's a whole nother . that 's a whole nother issue . the the idea of con convincing them that we should use their network i is fairly straight forward . the idea of being able to walk into their office and say , " , can i borrow your machine for a while " , is a non - starter . that i do n't think that 's gon na work . so , either we figure out how to use a machine somebody already in the group already owns , a and the idea is that if it 's it perk , it 's an advantage not a disadvan or else we literally buy a machine e exactly for that purpose . certainly it solves a lot of the problems with leaving a monitor out here all the time . i ' m not a big fan of doing things to the room that make the room less attractive for other people , which is part of the reason for getting all this out of the way and , so a monitor sitting here all the time people are gon na walk up to it and go , " how come i ca n't get , pong on this " or , whatev grad a: i ' ve borrowed the iram vaio sony thingy , and i do n't think they 're ever gon na want it back . undergrad d: , the certainly , u you should give it a shot first see whether you can get compatible . 
, ask them what it costs . ask them if they have an extra one . who knows , they might have an extra hardware s undergrad d: , the , tsk . it 's gon na be hooked up to all sorts of junk . there 's gon na be actually a plug at the front that 'll connect to people 's laptops so you can walk in and plug it in . and it 's gon na be con connected to the machine at the back . so we certainly could use that as a constant reminder of what the vu meters are doing . undergrad d: so people sitting here are going " testing , one , two , three " ! undergrad d: but but the idea of having a control panel it 's that 's there in front of you is really . undergrad d: as long as you d as l as long as you 're not tempted to sit there and f keep fiddling with the volume controls going , " can you talk a bit louder ? " grad a: i had actually earlier asked if i could borrow one of the cards to do wireless and they said , " , whenever you want " . so it wo n't be a problem . grad a: right , and if his does n't work , as i said , we can use the pc . undergrad d: i it 'll work it 'll work the first time . i trust steve jobs . grad a: so jim is gon na be doing wiring and you 're gon na give some thought to cabinets ? phd b: this so again , washington wants to equip a system . our system , we spent ten thousand dollars on equipment not including the pc . however , seven and a half thousand of that was the wireless mikes . using these undergrad d: and it and the f the five thousand for the wires , so if i ' m gon na do no . it 's a joke . phd b: but we have n't spent that , right ? but once we ' ve done the intellectual part of these , we can just knock them out , right ? phd b: and then we could washington could have a system that did n't have any wireless but would had what 's based on these and it would cost phd b: pc and two thousand dollars for the a - to - d . and that 's about cuz you would n't even need the mixer if you did n't have the th the p z ms cost a lot . but anyway you 'd save , on the seven or eight thousand for the wireless system . so actually that might be attractive . ok , move my thumb now . postdoc e: that 's a great idea . it 's it 's to be thinking toward that . grad a: , actually shorten there 's a speech compression program that works great on things like this , cuz if the dynamic range is low it encodes it with fewer bits . and so most of the time no one 's talking so it shortens it dramatically . but if you talk quieter , the dynamic range is lower and it will compress better . grad a: how do you spell that ? can you do one more round of digits ? are we done talking ? undergrad d: but you there 's a problem a structural problem with this though . you really need an incentive at the end if you 're gon na do digits again . like , candy bars , undergrad d: or both . eric , you and i win . we did n't make any mistakes . grad a: it 's it 's only a hard time for the transcriber not for the speech recognizer . undergrad d: very good . so eric , you win . but the other thing is that there 's a colon for transcripts . and there should n't be a colon . because see , everything else is you fill in . phd b: they start , six , seven , eight , nine , zero , one , two , three , four , five , six , eight , nine . phd b: and they 're in order because they 're sorted lexically by the file names , which are have the numbers in digits . and so they 're actually this is like all the all utterances that were generated by speaker mpj . and then within mpj they 're sorted by what he actually said . 
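the compression program grad a is reaching for above is shorten-style lossless coding: frames whose prediction residual has a small dynamic range need fewer bits per sample , so silence and quiet speech shrink dramatically . a toy estimate of that effect , not the real coder :

    import numpy as np

    def bits_per_frame(samples, frame=256):
        """rough shorten-style accounting: predict each sample from the
        previous one and charge log2 of the residual range per sample ,
        so low-dynamic-range (quiet) frames cost far fewer bits ."""
        x = np.asarray(samples, dtype=np.int64)
        resid = np.diff(x, prepend=x[:1])   # first-order predictor
        costs = []
        for i in range(0, len(resid), frame):
            r = resid[i:i + frame]
            rng = max(int(np.max(np.abs(r))), 1)
            costs.append(len(r) * (1 + rng.bit_length()))  # sign + magnitude
        return costs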
phd b: it does n't matter ! it 's like cuz you said " six , seven , eight " . undergrad d: but the real question i have is that , why bother with these ? why do n't you just ask people to repeat numbers they already know ? like phone numbers , social security numbers . undergrad d: , so you just say your credit card numbers , say your phone numbers , say your mother 's maiden name . grad a: actually , this i got this directly from another training set , from aurora . we can compare directly . phd b: there were no there were no direct driver errors , by the look of it , which is good .

summary: the berkeley meeting recorder group discussed the aims , methods , timing , and outsourcing issues concerning transcription of the meeting recorder corpus. the transcriber software tool was introduced , along with a set of transcription conventions for coding different speech events. the prospect of sending the data to an external transcription service was weighed against that of hiring a graduate student transcriber pool. it was tentatively decided that the latter option would be less costly and allow bmr to maintain greater control over the transcription process. methods for distributing the data were briefly discussed , along with an initiative for creating a bmr project website. the group received an update on the meeting room recording setup and electronics.

the group will obtain a copy of the corpus of spoken american english ( csae ) from the ldc to compare the methods and conventions used by uc santa barbara with those being considered for the meeting recorder project. speakers fe008 and me011 will experiment with pre-segmentation procedures in hopes of facilitating the transcription process. speaker fe008 will perform the transcription for one meeting ( 40-60 minutes of data ) as a pilot project. a tentative decision was made to put speaker fe008 in charge of the in-house transcription effort. modifications to the transcriber source code , e.g. adjusting the delay between the audio play function and the inputting of time boundaries , may be undertaken by the bmr group. an initiative for creating an internal bmr project website was discussed , along with ideas for providing web access to external organizations , such as the university of washington. a cabinet will be built to house wires and other electronic equipment used in the recording setup. a laptop and wireless modem will be available to participants for monitoring the recording progress.

open questions include : how should transcripts and corresponding audio files be formatted and distributed? external transcription services are expensive , difficult to monitor , and are unlikely to be able to handle multi-channel data. it is unclear which levels of transcription should be encoded. what size of segment is most useful for doing transcriptions? which level of segmentation is most suitable for the aims of the project? the transcriber interface only allows the user to view two overlapping speakers. decisions must be made regarding the security of electronic recording equipment.

efforts are in progress to collect and transcribe 30-40 hours of meeting recorder data. the transcriber tool is being used to do word-level transcriptions , and was reported to work well , when used with supplementary scripts , for specifying multiple speakers. other levels of transcription being considered include marking stress , repairs , and false starts. a set of transcription conventions has been formulated for marking colloquial forms , the continuity of utterances , etc.
a cost assessment was made for sending meeting recorder data to an external transcription service. it was agreed that hiring linguistics graduate students would be cheaper and allow the group to maintain greater control over the transcription process. tentative pre-segmentation efforts will enable the automatic generation of phrase boundaries and speaker identity coding , and will be extendable for use at uw. perl and xwaves scripts are available for extracting and aligning digits data. other suggestions for future work included performing multi-channel speech/non-speech detection , and linking an asr system to the transcriber tool. the recording room electronics setup has been diagrammed and will take approximately one week to configure. it was suggested that such efforts will enable researchers at uw and other collaborating institutions to create their own recording setups more cheaply.
grad e: two items , which was , digits and possibly on , forced alignment , which jane said that liz and andreas had in information on , but they did n't , phd f: we should do that second , because liz might join us in time for that . professor b: ok , so there 's digits , alignments , and , i the other thing , which i came unprepared for , is , to dis s see if there 's anything anybody wants to discuss about the saturday meeting . professor b: with with whatever it was , a month and a half ahead of time , the only time we could find in common roughly in common , was on a saturday . ugh . postdoc c: have we thought about having a conference call to include him in more of the meeting ? , i , if we had the telephone on the table phd f: no , actually i have to shuttle kids from various places to various other places . phd f: so . and i do n't have and i do n't , have a cell phone professor b: so we have to equip him with a with a head - mounted , cell phone grad e: ye - we and we 'd have to force you to read lots and lots of digits , phd f: i 'll let i 'd let i let , my five - year - old have a try at the digits , grad e: so , anyway , talk about digits . , did everyone get the results or shall i go over them again ? that it was the only thing that was even slightly surprising was that the lapel did so . , and in retrospect that 's not as surprising as maybe i it should n't have been as surprising as i felt it was . the lapel mike is a very high - quality microphone . and as morgan pointed out , that there are actually some advantages to it in terms of breath noises and clothes rustling if no one else is talking . professor b: , it 's , the bre the breath noises and the mouth clicks and like that , the lapel 's gon na be better on . professor b: the lapel is typically worse on the on clothes rustling , but if no one 's rustling their clothes , grad e: right . , a lot of people are just leaning over and reading the digits , grad g: probably the fact that it picks up other people 's speakers other people 's talking is an indication of that it the fact it is a good microphone . professor b: right . so in the digits , in most cases , there were n't other people talking . phd f: because i suppose you could make some that have that you have to orient towards your mouth , professor b: they 're they 're intended to be omni - directional . and th it 's and because you how people are gon na put them on , . grad e: so , also , andreas , on that one the back part of it should be right against your head . and that will he keep it from flopping aro up and down as much . professor b: . , we actually talked about this in the , front - end meeting this morning , too . much the same thing , professor b: and it was , there the point of interest to the group was primarily that , the system that we had that was based on h t k , that 's used by , all the participants in aurora , was so much worse than the s r professor b: and the interesting thing is that even though , yes , it 's a digits task and that 's a relatively small number of words and there 's a bunch of digits that you train on , it 's just not as good as having a l very large amount of data and training up a good big hmm . , also you had the adaptation in the sri system , which we did n't have in this . so . phd f: , i did , actually . so there was a significant loss from not doing the adaptation . 
a a couple percent or some , i it overall , i do n't remember , but there was { nonvocalsound } there was a significant , loss or win from adaptation with adaptation . and , that was the phone - loop adaptation . and then there was a very small like point one percent on the natives , win from doing , , adaptation to the recognition hypotheses . and i tried both means adaptation and means and variances , and the variances added another or subtracted another point one percent . so , it 's , that 's the number there . point six , i believe , is what you get with both , means and variance adaptation . professor b: but one thing is that , i would presume hav - have you ever t have you ever tried this exact same recognizer out on the actual ti - digits test set ? phd f: i could and they are using a system that 's , h is actually trained on digits , but h otherwise uses the same , decoder , the same , training methods , and , professor b: , bu although i 'd be it 'd be interesting to just take this exact actual system so that these numbers were comparable and try it out on ti - digits . professor b: , cuz we were getting sub one percent numbers on ti - digits also with the tandem thing . so , one so there were a number of things we noted from this . professor b: one is , the sri system is a lot better than the htk this , very limited training htk system . , but the other is that , the digits recorded here in this room with these close mikes , i , are actually a lot harder than the studio - recording ti - digits . , one reason for that , might be that there 's still even though it 's close - talking , there still is some noise and some room acoustics . and another might be that , i 'd i would presume that in the studio , situation recording read speech that if somebody did something a little funny or n pronounced something a little funny or made a little that they did n't include it , grad e: whereas , i took out the ones that i noticed that were blatant that were correctable . grad e: so that , if someone just read the wrong digit , i corrected it . and then there was another one where jose could n't tell whether i could n't tell whether he was saying zero or six . and i asked him and he could n't tell either . so cut it out . , so e edited out the first , i , word of the utterance . , so there 's a little bit of correction but it 's definitely not as clean as ti - digits . so my expectations is ti - digits would , especially ti - digits is all american english . right ? so it would probably do even a little better still on the sri system , but we could give it a try . phd f: but remember , we 're using a telephone bandwidth front - end here , on this sri system , so , i was that maybe that 's actually a good thing because it gets rid of some of the , the noises , , in the below and above the , speech bandwidth and , i suspect that to get the last bit out of these higher - quality recordings you would have to , use models that , were trained on wider - band data . and we ca n't do that or grad e: , that 's right . i did look that up . i could n't remember whether that was ti - digits or one of the other digit tasks . phd f: right . but but , i would it 's it 's easy enough to try , just run it on phd f: one issue with that is that , the system has this , notion of a speaker to which is used in adaptation , variance norm , both in , mean and variance normalization and also in the vtl estimation . so phd f: do y ? is ? so does so th so does , the ti - digits database have speakers that are known ? 
and is there enough data or a comparable amount of data to what we have in our recordings here ? grad e: that i . i how many speakers there are , and how many speakers per utterance . professor b: , the other thing would be to do it without the adaptation and compare to these numbers without the adaptation . that would phd f: right . , but i ' m not so much worried about the adaptation , actually , than the , vtl estimation . if you have only one utterance per speaker you might actually screw up on estimating the warping , factor . so , phd f: right . but it 's not the amount of speakers , it 's the num it 's the amount of data per speaker . grad e: right . so we could probably do an extraction that was roughly equivalent . so , although i know how to run it , there are a little a f few details here and there that i 'll have to dig out . phd f: the key so th the system actually extracts the speaker id from the waveform names . phd f: and there 's a script and that is actually all in one script . so there 's this one script that parses waveform names and extracts things like the , speaker , id that can stand in as a speaker id . so , we might have to modify that script to recognize the , speakers , in the , , ti - digits database . phd f: or you can fake names for these waveforms that resemble the names that we use here for the meetings . that would be the , probably the safest way to do grad e: i might have to do that anyway to do because we may have to do an extract to get the amount of data per speaker about right . the other thing is , is n't ti - digits isolated digits ? or is that another one ? i ' m i looked through a bunch of the digits t corp corpora , and now they 're all blurring . cuz one of them was literally people reading a single digit . and then others were connected digits . professor b: . most of ti - digits is connected digits , . the , we had a bellcore corpus that we were using . it was that 's that was isolated digits . phd f: , we can improve these numbers if we care to compr improve them by , not starting with the switchboard models but by taking the switchboard models and doing supervised adaptation on a small amount of digit data collected in this setting . because that would adapt your models to the room acoustics and f for the far - field microphones , to the noise . and that should really improve things , further . and then you use those adapted models , which are not speaker adapted but acous , channel adapted professor b: . but , w when you it depends whether you 're ju were just using this as a starter task for , to get things going for conversational or if we 're really interested i in connected digits . and the answer is both . and for connected digits over the telephone you do n't actually want to put a whole lot of effort into adaptation professor b: because somebody gets on the phone and says a number and then you just want it . you do n't , phd f: , but , i , my impression was that you were actually interested in the far - field microphone , problem , you want to that 's the obvious thing to try . right ? then , because you do n't have any that 's where the most m acoustic mismatch is between the currently used models and the r the set up here . professor b: . so that 'd be anoth another interesting data point . , i ' m saying i if we 'd want to do that as the as postdoc c: if you have a strong fe if you have a strong preference , you could use this . postdoc c: it 's just we think it has some spikes . so , we did n't use that one . 
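a sketch of the two filename tricks discussed above: parsing a speaker id out of a waveform name , and faking meeting-style names for ti-digits files so the same script accepts them . both patterns are assumptions ; the real parsing script's conventions are n't shown here :

    import re

    # assumed waveform naming pattern: <meeting>_<speaker>_<utterance>.wav
    NAME = re.compile(r"^(?P<meeting>[a-z0-9]+)_(?P<speaker>[a-z]{2,3}\d*)_(?P<utt>\d+)\.wav$")

    def speaker_id(wavename):
        m = NAME.match(wavename)
        if not m:
            raise ValueError("no speaker id in %r" % wavename)
        return m.group("speaker")

    def fake_name(corpus_speaker, utt):
        """rename an external corpus file so it parses like ours ."""
        return "tidigits_%s_%04d.wav" % (corpus_speaker, utt)

    print(speaker_id(fake_name("mpj", 3)))   # -> mpj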
phd f: ca n't quite seem to , this contraption around your head is not working so . professor b: too many adju too many adjustments . anyway , what i was saying is that i probably would n't want to see that as like the norm , that we compared all things to . professor b: to , the to have all this ad all this , adaptation . but it 's an important data point , if you 're if the other thing that , what barry was looking at was just that , the near versus far . and , the adaptation would get th some of that . but , even if there was , only a factor of two , like i was saying in the email , that 's a big factor . n grad e: liz , you could also just use the other mike if you 're having problems with that one . postdoc c: . this would be ok . we we think that this has spikes on it , phd f: , mine are too . e th everybody 's ears are too big for these things . phd a: no , my but this is too big for my head . so , it does n't , it 's sit grad e: so if it does n't bounce around too much , that 's actually good placement . but it looks like it 's gon na bounce a lot . professor b: , adaptation , non - adaptation , factor of two , . i was go w professor b: , no . it 's tha that we were saying , is how much worse is far than near , . professor b: but for the everybody , it 's little under a factor or two . i was thinking was that maybe , i we could actually t try at least looking at , some of the large vocabulary speech from a far microphone , at least from the good one . , before we 'd get , a hundred and fifty percent error , but if , if we 're getting thirty - five , forty percent , u phd a: actually if you run , though , on a close - talking mike over the whole meeting , during all those silences , you get , like , four hundred percent word error . professor b: , . get all these insertions . but i ' m saying if you do the same limited thing as people have done in switchboard evaluations or as a phd a: where who the speaker is and there 's no overlap ? and you do just the far - field for those regions ? grad e: could we do exactly the same thing that we 're doing now , but do it with a far - field mike ? grad e: cuz we extract the times from the near - field mike , but you use the acoustics from the far - field mike . phd a: right . i understand that . meant that so you have three choices . there 's , you can use times where that person is talking only from the transcripts but the segmentations were synchronized . or you can do a forced alignment on the close - talking to determine that , the , within this segment , these really were the times that this person was talking and elsewhere in the segment other people are overlapping and just front - end those pieces . or you can run it on the whole data , which is , a professor b: but but how did we get the how did we determine the links , that we 're testing on in the we reported ? phd a: in the h l t paper we took segments that are channel time - aligned , which is now h being changed in the transcription process , which is good , and we took cases where the transcribers said there was only one person talking here , because no one else had time any words in that segment and called that " non - overlap " . phd a: yes . tho - good the good numbers . the bad numbers were from the segments where there was overlap . professor b: , we could start with the good ones . 
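the experiment just proposed -- fix the segment times on the close-talking channel , then decode the same regions from a far-field mike -- only needs a segment cut-out , since all channels share one clock . a sketch assuming 16 khz , 16-bit mono wav files and (start , end) times in seconds :

    import wave
    import numpy as np

    def cut_segments(farfield_wav, segments, rate=16000):
        """slice far-field audio at times fixed on the near-field channel;
        the recordings are sample-synchronous , so the times carry over ."""
        with wave.open(farfield_wav, "rb") as w:
            assert w.getframerate() == rate and w.getnchannels() == 1
            assert w.getsampwidth() == 2   # 16-bit samples assumed
            audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        return [audio[int(s * rate):int(e * rate)] for s, e in segments]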
but anyway so that we should try it once with the same conditions that were used to create those , and in those same segments just use one of the p z professor b: and then , , if we were getting , what , thirty - five , forty percent , something like that on that particular set , does it go to seventy or eighty ? or , does it use up so much memory we ca n't decode it ? phd a: it might also depend on which speaker th it is and how close they are to the pzm ? i how different they are from each other . professor b: , it 's so i but i would i 'd pick that one . it 'll be less good for some people than for other , but i 'd like to see it on the same exact same data set that we did the other thing on . grad e: actually i sh actually should ' ve picked a different one , because that could be why the pda is worse . because it 's further away from most of the people reading digits . professor b: but the other is , it 's very , even though there 's i ' m the f the sri , front - end has some pre - emphasis , it 's , still , th it 's picking up lots of low - frequency energy . so , even discriminating against it , i ' m some of it 's getting through . but , you 're right . prob - a part of it is just the distance . professor b: , they 're bad . but , if you listen to it , it sounds ok . ? u . grad e: . when you listen to it , the pzm and the pda , th the pda has higher sound floor but not by a lot . it 's really pretty , the same . phd a: remember you saying you got them to be cheap on purpose . cheap in terms of their quality . so . grad e: th - we wanted them to be typical of what would be in a pda . grad e: so they are they 're not the pzm three hundred dollar type . they 're the twenty - five cent , buy them in packs of thousand type . professor b: but , people use those little mikes for everything because they 're really not bad . professor b: , if you 're not doing something ridiculous like feeding it to a speech recognizer , they , you can hear the sou hear the sounds just fine . , it 's they , i it 's more or less the same principles as these other mikes are built under , it 's just that there 's less quality control . they just , churn them out and do n't check them . so that was i interesting result . so like i said , the front - end guys are very much interested in this is as phd f: so so , but where is this now ? , what 's where do we go from here ? phd f: , we so we have a system that works pretty but it 's not , the system that people here are used to using to working with . professor b: and we ' ve talked about this in other contexts we want to have the ability to feed it different features . and then , from the point of view of the front - end research , it would be s , substituting for htk . that 's the key thing . and then if we can feed it different features , then we can try all the different things that we 're trying there . professor b: and then , , also dave is thinking about using the data in different ways , to , explicitly work on reverberation starting with some techniques that some other people have found somewhat useful , and . phd f: so so the key thing that 's missing here is the ability to feed , other features i into the recognizer and also then to train the system . and , es i when chuck will be back but that 's exactly what he 's gon na professor b: h h he 's he 's back , but he drove for fourteen hours an and was n't gon na make it in today . phd f: so , that 's one of the things that he said he would be working on . just t to make that we can do that and . 
it 's , the front - end is f i tha that 's in the sri recognizer is very in that it does a lot of things on the fly but it unfortunately is not designed and , like the , icsi system is , where you can feed it from a pipeline of the command . so , the what that means probably for the foreseeable future is that you have to , dump out , if you want to use some new features , you have to dump them into individual files and give those files to the recognizer . grad e: we do we tend to do that anyway . so , although you can pipe it as , we tend to do it that way because that way you can concentrate on one block and not keep re - doing it over and over . phd f: , the cumbersome thing is , is that you actually have to dump out little files . so for each segment that you want to recognize you have to dump out a separate file . just like i th like th as if there were these waveform segments , but instead you have feature file segments . but , professor b: ok . so the s the next thing we had on the agenda was something about alignments ? phd a: yes , we have i , did you wanna talk about it , give a i was just telling this to jane and w we were able to get some definite improvement on the forced alignments by looking at them first and then realizing the kinds of errors that were occurring and , some of the errors occurring very frequently are just things like the first word being moved to as early as possible in the recognition , which is a , was both a pruning problem and possibly a problem with needing constraints on word locations . and so we tried both of these st things . we tried saying i got this whacky idea that just from looking at the data , that when people talk their words are usually chunked together . it 's not that they say one word and then there 's a bunch of words together . they 're might say one word and then another word far away if they were doing just backchannels ? but in general , if there 's , like , five or six words and one word 's far away from it , that 's probably wrong on average . so , and then also , ca the pruning , was too severe . phd f: so that 's actually interesting . the pruning was the same value that we used for recognition . and we had lowered that we had used tighter pruning after liz ran some experiments showing that , it runs slower and there 's no real difference in phd a: it 's probably cuz the recognition 's just bad en at a point where it 's bad enough that you do n't lose anything . phd f: you correct . , but it turned out for to get accurate alignments it was really important to open up the pruning significantly . because otherwise it would do greedy alignment , in regions where there was no real speech yet from the foreground speaker . , so that was one big factor that helped improve things and then the other thing was that , as liz said the we f enforce the fact that , the foreground speech has to be continuous . it can not be you can not have a background speech hypothesis in the middle of the foreground speech . you can only have background speech at the beginning and the end . phd a: . , it is n't always true , and what we really want is some clever way to do this , where , , from the data or from maybe some hand - corrected alignments from transcribers that things like words that do occur just by themselves a alone , like backchannels that we did allow to have background speech around it those would be able to do that , but the rest would be constrained . so , we have a version that 's pretty good for the native speakers . 
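the contiguity constraint phd f describes has a simple shape: a hypothesis may contain the background-speech model only at the edges , never inside the foreground words -- i.e. bg* fg+ bg* . a check for that shape , with the tag names assumed :

    def foreground_contiguous(tags):
        """tags: per-token labels , 'fg' for foreground words and 'bg' for
        the background-speech model .  true iff the hypothesis matches
        bg* fg+ bg* , i.e. background speech only at beginning and end ."""
        i, j = 0, len(tags)
        while i < j and tags[i] == "bg":
            i += 1
        while j > i and tags[j - 1] == "bg":
            j -= 1
        core = tags[i:j]
        return bool(core) and all(t == "fg" for t in core)

    assert foreground_contiguous(["bg", "fg", "fg", "bg"])
    assert not foreground_contiguous(["fg", "bg", "fg"])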
i yet about the non - native speakers . and , we also made noise models for the different grouped some of the mouth noises together . , so , and then there 's a background speech model . and we also there was some neat or , interesting cases , like there 's one meeting where , jose 's giving a presentation and he 's talking about , the word " mixed signal " and someone did n't understand , that you were saying " mixed " , morgan . and so your speech - ch was s saying something about mixed signal . phd a: and the next turn was a lot of people saying " mixed " , like " he means mixed signal " or " it 's mixed " . and the word " mixed " in this segment occurs , like , a bunch of times . phd a: and chuck 's on the lapel here , and he also says " mixed " but it 's at the last one , and the aligner th aligns it everywhere else to everybody else 's " mixed " , cuz there 's no adaptation yet . so there 's some issues about u we probably want to adapt at least the foreground speaker . but , i andreas tried adapting both the foreground and a background generic speaker , and that 's actually a little bit of a f funky model . like , it gives you some weird alignments , just because often the background speakers match better to the foreground than the foreground speaker . so there 's some things there , especially when you get lots of the same words , occurring in the phd f: , the you can do better by , cloning so we have a reject phone . and you and what we wanted to try with , once we have this paper written and have a little more time , t cloning that reject model and then one copy of it would be adapted to the foreground speaker to capture the rejects in the foreground , like fragments and , and the other copy would be adapted to the background speaker . phd a: right now the words like partial words are reject models and you normally allow those to match to any word . but then the background speech was also a reject model , and so this constraint of not allowing rejects in between , it needs to differentiate between the two . so just working through a bunch of debugging kinds of issues . and another one is turns , like people starting with " and someone else is " how about " . so the word " is in this segment multiple times , and as soon as it occurs usually the aligner will try to align it to the first person who says it . but then that constraint of , proximity constraint will push it over to the person who really said it in general . grad e: is the proximity constraint a hard constraint , or did you do some probabilistic weighting distance , or ? phd f: we w we it 's straightforward to actually just have a penalty that does n't completely disallows it but discourages it . but , we just did n't have time to play with , tuning yet another parameter . phd f: and really the reason we ca n't do it is just that we do n't have a we do n't have ground truth for these . so , we would need a hand - marked , word - level alignments or at least the boundaries of the speech betw , between the speakers . , and then use that as a reference and tune the parameters of the model , to op to get the best performance . professor b: g given , i wa i was gon na ask you anyway , how you assessed that things were better . phd a: i looked at them . i spent two days , in waves , it was painful because , the alignments share a lot in common , so and you 're yo you 're looking at these segments where there 's a lot of speech . , a lot of them have a lot of words . 
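and the proximity idea -- one speaker's words usually come in chunks , so a word stranded far from its neighbors is suspect -- can be scored as the soft penalty phd f mentions rather than a hard rule . the threshold and weight here are exactly the extra parameters that , as noted , would need hand-marked reference alignments to tune :

    def proximity_penalty(word_times, max_gap=1.0, weight=5.0):
        """word_times: (start , end) pairs for one speaker's words in a
        segment .  charge cost for each gap between consecutive words
        beyond max_gap seconds , discouraging stranded words without
        forbidding genuinely isolated ones like backchannels ."""
        cost = 0.0
        for (s0, e0), (s1, e1) in zip(word_times, word_times[1:]):
            gap = s1 - e0
            if gap > max_gap:
                cost += weight * (gap - max_gap)
        return cost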
not by every speaker but by some speaker there 's a lot of words . no , not that if you look at the individual segments from just one person you do n't see a lot of words , phd a: but altogether you 'll see a lot of words up there . and so the reject is also mapping and pauses so i looked at them all in waves and just lined up all the alignments , and , at first it looked like a mess and then the more i looked at it , " ok , it 's moving these words leftward and " , it was n't that bad . it was just doing certain things wrong . so but , i do n't , have time to l to look of them and it would be really useful to have , like , a transcriber who could use waves , just mark , like , the beginning and end of the foreground speaker 's real words like , the beginning of the first word , the end of the last word and then we could , do some adjustments . postdoc c: i ok . i have to ask you something , is i does it have to be waves ? because if we could benefit from what you did , incorporate that into the present transcripts , that would help . and then , the other thing is , i believe that i did hand so . one of these transcripts was gone over by a transcriber and then i hand - marked it myself so that we do have , the beginning and ending of individual utterances . , i did n't do it word level , but in terms so i so for one of the n s a groups . and also i went back to the original one that i first transcribed and did it w , utterance by utterance for that particular one . so you do have if that 's a sufficient unit , that you do have hand - marking for that . but it 'd be wonderful to be able to benefit from your waves . phd a: , jane and i were just in terms of the tool , talking about this . i sue had some reactions . , interface - wise if you 're looking at speech , you wanna be able to know really where the words are . and so , we can give you some examples of what this output looks like , phd a: , and see if you can in maybe incorporate it into the transcriber tool some way , or postdoc c: , i th i ' m thinking just ch e incorporating it into the representation . , if it 's postdoc c: if you have start points , if you have , like , time tags , which is what i assume . is n't that what you ? , see , adam would be phd f: , whatever you use . , we convert it to this format that the , nist scoring tool unders , ctm . conversation time - marked file . and and then that 's the that 's what the postdoc c: if it ? so you would know this more than i would . it seems like she if she 's g if she 's moving time marks around , since our representation in transcriber uses time marks , it seems like there should be some way of using that benefitting from that . phd a: , it wou the advantage would just be that when you brought up a bin you would be able if you were zoomed in enough in transcriber to see all the words , you would be able to , like , have the words located in time , if you wanted to do that . professor b: so so if we e even just had a it sounds like w we almost do . , if we we have two . professor b: just ha , trying out the alignment procedure that you have on that you could actually get something , get an objective measure . phd a: you mean on the hand - marked , so we only r hav i only looked at actually alignments from one meeting that we chose , mr four , just randomly , and phd a: it had average recognition performance in a bunch of speakers and it was a meeting recorder meeting . but , we should try to use what you have . i did re - run recognition on your new version of mr one . 
phd a: that , actually it was n't the new , it was the medium new . but but we would we should do the latest version . it was the one from last week . postdoc c: yes . yes , i did . and furthermore , i found that there were a certain number where not a lot , but several times i actually moved an utterance from adam 's channel to dan 's or from dan 's to adam 's . so there was some speaker identif and the reason was because i transcribed that at a point before , before we had the multiple audio available f so i could n't switch between the audio . i transcribed it off of the mixed channel entirely , which meant in overlaps , i was at a terrific disadvantage . in addition it was before the channelized , possibility was there . and finally i did it using the speakers of my , of , off the cpu on my machine cuz i did n't have a headphone . so it @ , like , , i in retrospect it would ' ve been good to ha have got i should ' ve gotten a headphone . but in any case , thi this is this was transcribed in a , less optimal way than the ones that came after it , and i was able to , an and this meant that there were some speaker identif identifications which were changes . grad g: is that what you 're referring to ? , cuz there 's this one instance when , you 're running down the stairs . grad g: right . it 's a , i ' ve i ' m very acquainted with this meeting . , s grad g: , i know it by heart . so , there 's one point when you 're running down the stairs . right ? and , like , there 's an interruption . you interrupt somebody , but then there 's no line after that . , there 's no speaker identification after that line . is that what you 're talking about ? or were there mislabellings as far as , like , the a adam was ? postdoc c: for mentioning . , no , tha that that went away a couple of versions ago , postdoc c: . so , with under , listening to the mixed channel , there were times when , as surprising as that is , i got adam 's voice confused with dan 's and vice versa not for long utterances , but jus just a couple of places , and embedde embedded in overlaps . the other thing that was w interesting to me was that i picked up a lot of , backchannels which were hidden in the mixed signal , which , , you c not too surprising . but the other thing that i had n't thought about this , but i thou i wanted to raise this when you were , with respect to also a strategy which might help with the alignments potentially , but that 's when i was looking at these backchannels , they were turning up usually very often in w , i wo n't say " usually " but anyway , very often , i picked them up in a channel w which was the person who had asked a question . s so , like , someone says " an and have you done the so - and - so ? " and then there would be backchannels , but it would be the person who asked the question . other people were n't really doing much backchannelling . and , sometimes you have the , - . postdoc c: , i it would n't be perfect , but it does seem more natural to give a backchannel when you 're somehow involved in the topic , postdoc c: and the most natural way is for you to have initiated the topic by asking a question . phd f: no . it 's actually what 's going on is backchannelling is something that happens in two - party conversations . and if you ask someone a question , you essentially initiating a little two - party conversation . phd f: so then you 're so and then you 're expected to backchannel because the person is addressing you directly and not everybody . postdoc c: exactly my point . 
an - and so this is the expectation thing that , just the dyadic but in addition , if someone has done this analysis himself and is n't involved in the dyad , but they might also give backchannels to verify what the answer is that this that the answerer 's given phd a: but there are fewer " - huhs " . , just from we were looking at word frequency lists to try to find the cases that we would allow to have the reject words in between in doing the alignment . the ones we would n't constrain to be next to the other words . phd a: and " - " is not as frequent as it would be in switchboard , if you looked at just a word frequency list of one - word short utterances . and " is way up there , but not " - " . and so i was thinking thi it 's not like you 're being encouraged by everybody else to keep talking in the meeting . and , that 's all , i 'll stop there , cuz what you say makes a lot of sense . postdoc c: , an and what you say is the re , o other side of this , which is that , so th there are lots of channels where you do n't have these backchannels , w when a question has been asked and these phd a: even if you consider every other person altogether one person in the meeting , but we 'll find out anyway . we were i the other thing we 're i should say is that we 're gon na , try compare this type of overlap analysis to switchboard , where and callhome , where we have both sides , so that we can try to answer this question of , is there really more overlap in meetings or is it just because we do n't have the other channel in switchboard and we what people are doing . try to create a paper out of that . professor b: . , y you folks have probably already told me , but were you intending to do a eurospeech submission , or ? phd a: , you mean the one due tomorrow ? . , we 're still , like , writing the scripts for doing the research , and we will yes , we 're gon na try . and i was telling don , do not take this as an example of how people should work . phd a: it 'll probably be a little late , but i ' m gon na try it . grad e: it is different . in previous years , eurospeech only had the abstract due by now , not the full paper . and so all our timing was off . i ' ve given up on trying to do digits . do n't think that what i have so far makes a eurospeech paper . phd a: , i ' m no we may be in the same position , and i figured we 'll try , because that 'll at least get us to the point where we have we have this really database format that andreas and i were working out that it it 's not very fancy . it 's just a ascii line by line format , but it does give you information phd a: it , we 're calling these " spurts " after chafe . i was trying to find what 's a word for a continuous region with pauses around it ? professor b: yes . and that 's , i was using that for a while when i was doing the rate of speech , professor b: because i looked up in some books and i found ok , i wanna find a spurt in which professor b: and an because cuz it 's another question about how many pauses they put in between them . phd a: , chafe had this wor it was chafe , or somebody had a the word " spurt " originally , phd a: so we have spurts and we have spurt - ify dot shell and spurt - ify phd f: so what we 're doing , this is just maybe someone has s some ideas about how to do it better , phd f: but we so we 're taking these , alignments from the individual channels . 
we 're from each alignment we 're producing , one of these ctm files , phd f: which essentially has it 's just a linear sequence of words with the begin times for every word and the duration . phd f: right . but it has one the first column has the meeting name , so it could actually contain several meetings . and the second column is the channel . third column is the , start times of the words and the fourth column is the duration of the words . and then we 're , ok . then we have a messy alignment process where we actually insert into the sequence of words the , tags for , like , where sentence ends of sentence , question marks , various other things . phd a: . these are things that we had don so , don , propagated the punctuation from the original transcriber so whether it was , like , question mark or period or , , comma and things like that , and we kept the and disfluency dashes , kept those in because we wanna know where those are relative to the spurt overlaps sp overlaps , phd f: so so those are actually retro - fitted into the time alignment . and then we merge all the alignments from the various channels and we sort them by time . and then there 's a process where you now determine the spurts . that is actually , no , you do that before you merge the various channels . so you i d identify by some criterion , which is pause length you identify the beginnings and ends of these spurts , and you put another set of tags in there to keep those straight . and then you merge everything in terms of , linearizing the sequence based on the time marks . and then you extract the individual channels again , but this time where the other people start and end talking , where their spurts start and end . and so you extract the individual channels , one sp spurt by spurt as it were . , and inside the words or between the words you now have begin and end tags for overlaps . so , you have everything lined up and in a form where you can look at the individual speakers and how their speech relates to the other speakers ' speech . phd a: , that 's actually really u useful also because even if you were n't studying overlaps , if you wanna get a transcription for the far - field mikes , how are you gon na know which words from which speakers occurred at which times relative to each other ? you have to be able to get a transcript like this anyway , just for doing far - field recognition . , it 's i thi it 's just an issue we have n't dealt with before , how you time - align things that are overlapping anyway . grad e: , s when i came up with the original data suggested data format based on the transcription graph , there 's capability of doing that thing in there . phd a: , this is like a poor man 's ver formatting version . but it 's , it 's clean , it 's just not fancy . phd f: , there 's lots of little things . it 's like there 're twelve different scripts which you run and then at the end you have what you want . but , at the very last stage we throw away the actual time information . all we care about is whether that there 's a certain word was overlapped by someone else 's word . so you at that point , you discretize things into just having overlap or no overlap . because we figure that 's about the level of analysis that we want to do for this paper . but if you wanted to do a more fine - grained analysis and say , how far into the word is the overlap , you could do that . 
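( a minimal sketch of the spurt-ify steps just described , reusing the CtmWord records from the earlier sketch : split each channel 's word stream into spurts wherever the inter-word pause exceeds a threshold , merge all channels by start time , and flag each word as overlapped or not . the threshold value and function names are assumptions , not the actual dozen scripts : )

PAUSE_THRESH = 0.5  # seconds; the real criterion is some tunable pause length

def spurtify(words):
    # words: one channel's CtmWord list, sorted by start time.
    # a new spurt begins whenever the pause since the previous
    # word's end exceeds PAUSE_THRESH.
    spurts, current = [], []
    for w in words:
        if current and w.start - (current[-1].start + current[-1].duration) > PAUSE_THRESH:
            spurts.append(current)
            current = []
        current.append(w)
    if current:
        spurts.append(current)
    return spurts

def mark_overlap(channels):
    # channels: dict mapping channel name -> sorted CtmWord list.
    # merge everything by time, then discretize each word to a
    # simple overlapped / not-overlapped flag, as done for the paper.
    merged = sorted((w for ws in channels.values() for w in ws), key=lambda w: w.start)
    flagged = []
    for w in merged:
        overlapped = any(
            o.channel != w.channel
            and o.start < w.start + w.duration
            and w.start < o.start + o.duration
            for o in merged  # quadratic, but fine for a sketch
        )
        flagged.append((w, overlapped))
    return flagged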
phd f: it 's just it 'll just require more postdoc c: what 's interesting is it 's exactly what , i in discussing with , sue about this , she , i indicated that , that 's very important for overlap analysis . phd a: . it 's it 's to know , and also as a human , like , i do n't always hear these in the actual order that they occur . so have two foreground speakers , morgan an and , adam and jane could all be talking , and i could align each of them to be starting their utterance at the correct time , and then look where they are relative to each other , and that 's not really what i heard . postdoc c: when where in psy ps psycho - linguistics you have these experiments where people have perceptual biases a as to what they hear , phd a: and yo then you can bring in the other person . so it 's actually not even possible , for any person to listen to a mixed signal , even equalize , and make that they have all the words in the right order . so , i , we 'll try to write this eurospeech paper . phd a: , we will write it . whether they accept it late or not , i . , and the good thing is that we have it 's a beginning of what don can use to link the prosodic features from each file to each other . phd a: we - i ju otherwise we wo n't get the work done on our deadline . phd f: i , m , u jane likes to look at data . maybe , you could look at this format and see if you find anything interesting . i . professor b: no , it 's that 's the good thing about these pape paper deadlines and , , class projects , and things like that , phd f: th the other thing that yo that you usually do n't tell your graduate students is that these deadlines are actually not that , , strictly enforced , phd f: i because these the conference organizers actually have an interest in getting lots of submissions . , a monetary interest . professor b: there 's a special aurora session and the aurora pe people involved in aurora have till ma - , early may to turn in their paper . phd f: , then you can just maybe you can submit the digits paper on e for the aurora session . phd f: but but the people , a paper that is not on aurora would probably be more interesting at that point grad e: , you meant this was just the digits section . i did n't know you meant it was aurora digits . phd f: , no . if you if you have it 's to if you discuss some relation to the aurora task , like if you use the same grad e: , i . , you could do a paper on what 's wrong with the aurora task by comparing it to other ways of doing it . phd f: how does an aurora system do on , on digits collected in a in this environment ? professor b: it 's a littl little far - fetched . nah , aurora 's pretty closed community . , the people who were involved in the only people who are allowed to test on that are people who made it above a certain threshold in the first round , phd f: , that 's maybe why they do n't f know that they have a crummy system . , a crummy back - end . no , i mean , if you have a very no , i ' m . phd f: no . i did n't mean anybody any particular system . i meant this h t k back - end . phd f: if they i do n't h i do n't have any stock in htk or entropic or anything . professor b: no . , this it 's the htk that is trained on a very limited amount of data . phd f: but so , if you but maybe you should , consider more using more data , or phd f: if yo if you hermetically stay within one task and do n't look left and right , then you 're gon na grad e: and so you can argue about maybe that was n't the right thing to do , but , they had something specific .
professor b: but , one of the reasons i have chuck 's messing around with the back - end that you 're not supposed to touch , for the evaluations , yes , we 'll run a version that has n't been touched . but , one of the reasons i have him messing around with that , because it 's an open question that we the answer to . people always say very glibly that i if you s show improvement on a bad system , that does n't mean anything , cuz it may not be show , because , it does n't tell you anything about the good system . and i ' ve always felt that depends . , that if some peopl if you 're actually are getting at something that has some conceptual substance to it , it will port . and , most methods that people now use were originally tried with something that was not their absolute best system at some level . but , sometimes it does n't , port . so that 's an interesting question . if we 're getting three percent error on , u , english , nati native speakers , using the aurora system , and we do some improvements and bring it from three to two , do those same improvements bring , th , the sri system from one point three to , to point eight ? professor b: , so that 's something we can test . anyway . we ' ve covered that one up extremely . professor b: so , so tha so we 'll , maybe you guys 'll have one . , you and , and dan have a paper that 's going in . , that 's pretty solid , on the segmentation . phd f: it 's a eurospeech paper but not related to meetings . but it 's on digits . so , , a colleague at sri developed a improved version of mmie training . and he tested it mostly on digits because it 's a , it does n't take weeks to train it . and got some very impressive results , with , discriminative , gaussian training . , like , error rates go from i , in very noisy environment , like from , i for now i ok , now i have the order of magnit i ' m not about the order of magnitude . was it like from ten percent to eight percent or from e , point , from one percent to point eight percent ? professor b: , let 's see . the only thing we had left was unless somebody else , there 's a couple things . , one is anything that , anybody has to say about saturday ? anything we should do in prep for saturday ? i everybody knows about , u , mari was asking was trying to come up with something like an agenda and we 're fitting around people 's times a bit . but , clearly when we actually get here we 'll move things around this , as we need to , but so you ca n't count on it . professor b: we won we wanna , they 're there 's gon na be , jeff , katrin , mari and two students . so there 's five from there . phd f: maybe the sections that are not right afte , after lunch when everybody 's still munching and phd a: so can you send out a schedule once it , jus ? is is there a r ? professor b: i had n't heard back from mari after i u , brought up the point abou about andreas 's schedule . so , maybe when i get back there 'll be some mail from her . so , i 'll make a phd a: , i know about the first meeting , but the other one that you did , the nsa one , which we had n't done cuz we were n't running recognition on it , because the non - native speaker there were five non - native speakers . but , it would be useful for the to see what we get with that one . professor b: th - that part 's definitely gon na confuse somebody who looks at these later . , this is we 're recording secret nsa meetings ? 
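( an aside on the porting question above : three percent down to two percent and one point three down to point eight are roughly the same relative error reduction , which is presumably the quantity one would hope ports across systems . a quick check : )

# relative word-error-rate reduction behind the porting question
baseline_htk, improved_htk = 3.0, 2.0
rel_reduction = (baseline_htk - improved_htk) / baseline_htk   # 1/3
baseline_sri = 1.3
predicted_sri = baseline_sri * (1 - rel_reduction)
print(f"relative reduction: {rel_reduction:.1%}, predicted sri wer: {predicted_sri:.2f}%")
# -> relative reduction: 33.3%, predicted sri wer: 0.87% (close to the 0.8 quoted)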
phd f: the , th the other good thing about the alignments is that , it 's not always the machine 's fault if it does n't work . so , you can actually find , phd f: you can find , problems with the transcripts , , and go back and fix them . phd a: tha - there are some cases like where the wrong speaker , these ca not a lot , but where the wrong person the speech is addre attached to the wrong speaker and you can tell that when you run it . or at least you can get clues to it . phd a: so these are from the early transcriptions that people did on the mixed signals , like what you have . postdoc c: i it does w it also raises the possibility of , using that representation , i , this 'd be something we 'd wanna check , but maybe using that representation for data entry and then displaying it on the channelized , representation , cuz it that the , my preference in terms of , like , looking at the data is to see it in this musical score format . and also , s , sue 's preference as . postdoc c: and and but , this if this is a better interface for making these kinds of , , lo clos local changes , then that 'd be fine , too . i do n't i have no idea . this is something that would need to be checked . professor b: th - the other thing i had actually was , i did n't realize this till today , but , this is , jose 's last day . phd h: and i would like to say very much , to all people in the group and at icsi , phd h: because i enjoyed @ very much , and i ' m by the result of overlapping , because , i have n't good results , yet but , i pretend to continuing out to spain , during the following months , because i have , another ideas but , i have n't enough time to with six months it 's not enough to research , and e i , if , the topic is , so difficult , in my opinion , there is n't professor b: . maybe somebody else will come along and will be , interested in working on it and could start off from where you are also , . they 'd make use of what you ' ve done . phd h: but , i will try to recommend , at , the spanish government but , the following @ scholarship , , will be here more time , because , i in my opinion is better , for us to spend more time here and to work more time i in a topic . phd h: it 's difficult . you e you have , you are lucky , and you find a solution in some few tim , months , ? ok . but , it 's not , common . but , anyway , . very much . , i bring the chocolate , to tear , with you , i hope if you need , something , from us in the future , i will be at spain , to you help , phd h: ye - ye you prefer , to eat , chocolate , at the coffee break , at the ? or you prefer now , before after ? grad e: , we ' ve got ta until after di after we take the mikes off . phd a: we could do digits while other people eat . so it 's background crunching . we do n't have background chewing . phd a: no , we do n't have any data with background eating . i ' m serious . you phd a: and it you have to write down , like , while y what you 're what ch chocolate you 're eating phd a: cuz they might make different sounds , like n nuts chocolate with nuts , chocolate without nuts . professor b: actually actually careful cuz i have a strong allergy to nuts , so i have to figure out one without th phd a: maybe those ? they 're so i . this is , this is a different speech , professor b: . i may hold off . but if i was , but maybe i 'll get some later . , why do n't we ? he he 's worried about a ticket . why do n't we do a simultaneous one ? simultaneous one ? 
phd a: and you laughed at me , too , f the first time i said that . phd a: you have to , jose , if you have n't done this , you have to plug your ears while you 're t talking postdoc c: you ' ve read digits together with us , have n't you , at the same time ? grad e: we could do the same sheet for everyone . have them all read them at once .
the berkeley meeting recorder group discussed efforts to train and test the aurora group's htk-based recognition system on icsi's digits corpus. members also discussed efforts to produce forced alignments from a selection of meeting recorder data. performance in both tasks was adversely affected by the recording conditions and by difficulties attributing utterances to the appropriate speakers. while debugging efforts resulted in improved forced alignments , dealing with mixed channel speech and speaker overlap remains a key objective for future work. the group additionally wants to be able to feed different features into the recognizer and then train the system accordingly. for comparing meeting recorder digits results , it was decided that the aurora htk-based system should be tested on data from the ti-digits corpus. the script for extracting speaker id information will require modifications to obtain a more accurate estimate of the amount of data recorded per speaker. subsequent recognition experiments will look at large vocabulary speech from a far-field microphone ( as performed in switchboard evaluations ). hand-marked , word-level alignments are needed to reveal speaker boundaries and tune the parameters of the model. modifications to the transcriber tool are required to allow transcribers to simultaneously view the signal in xwaves and see where words are located in time. digits training needs to be performed on a larger data set. a significant loss in recognition resulted from not having included the type of phone-loop adaptation found in the sri system. recognition performance was worse for digits recorded here with close-talking microphones than for those recorded in a studio ( e.g . ti-digits ). a mismatch between the manner in which data were collected and the models used for doing recognition---e.g . bandwidth parameterization and the use of near- versus far-field microphones---was identified. too little data per speaker can have a negative effect on vtl estimation. the pzm channel selected for obtaining digits data was too far away from most of the speakers. current speech alignment techniques assume that foreground speech must be continuous and , barring some isolated words and backchannels , cannot cope with overlapping background speech. performing adaptations on both the foreground and background speaker produced a new variety of misalignments , a problem resulting , in part , from the fact that background speakers often match better to foreground conditions. transcribers occasionally misidentified speakers and omitted backchannels that were hidden in the mixed signal. good recognition performance was achieved with the lapel microphones. the recognizer performed well on time-aligned segments labelled as 'non-overlap' ( i.e . one person talking ) , while segments labelled as 'overlap' ( i.e . multiple speakers talking at the same time ) yielded poor results. future recognition efforts will include looking at reverberation. forced alignment improvements were gained by examining the types of errors generated and making the necessary adjustments. more accurate alignments were achieved by significantly increasing the pruning value. future alignment efforts will include cloning the reject model and adapting it to both the foreground and background speaker. members of the group will also compare meeting recorder data with other corpora ( e.g . switchboard ) to determine whether speaker overlap is more characteristic of meetings than of other modes of spoken interaction.
a cursory analysis of background speech revealed that backchannels frequently occurred after a question was asked. backchannels also featured a high proportion of 'yeahs' and substantially fewer 'uh-huhs'. several group members are preparing eurospeech submissions. speakers fe016 and mn017 are preparing a paper about the 'spurt' format , wherein spurts from individual channels---i.e . continuous speech regions delineated by pauses---will be extracted , merged with alignments from different channels , and time-aligned.
phd f: the , th the other good thing about the alignments is that , it 's not always the machine 's fault if it does n't work . so , you can actually find , phd f: you can find , problems with the transcripts , , and go back and fix them . phd a: tha - there are some cases like where the wrong speaker , these ca not a lot , but where the wrong person the speech is addre attached to the wrong speaker and you can tell that when you run it . or at least you can get clues to it . phd a: so these are from the early transcriptions that people did on the mixed signals , like what you have . postdoc c: i it does w it also raises the possibility of , using that representation , i , this 'd be something we 'd wanna check , but maybe using that representation for data entry and then displaying it on the channelized , representation , cuz it that the , my preference in terms of , like , looking at the data is to see it in this musical score format . and also , s , sue 's preference as . postdoc c: and and but , this if this is a better interface for making these kinds of , , lo clos local changes , then that 'd be fine , too . i do n't i have no idea . this is something that would need to be checked . professor b: th - the other thing i had actually was , i did n't realize this till today , but , this is , jose 's last day . phd h: and i would like to say very much , to all people in the group and at icsi , phd h: because i enjoyed @ very much , and i ' m by the result of overlapping , because , i have n't good results , yet but , i pretend to continuing out to spain , during the following months , because i have , another ideas but , i have n't enough time to with six months it 's not enough to research , and e i , if , the topic is , so difficult , in my opinion , there is n't professor b: . maybe somebody else will come along and will be , interested in working on it and could start off from where you are also , . they 'd make use of what you ' ve done . phd h: but , i will try to recommend , at , the spanish government but , the following @ scholarship , , will be here more time , because , i in my opinion is better , for us to spend more time here and to work more time i in a topic . phd h: it 's difficult . you e you have , you are lucky , and you find a solution in some few tim , months , ? ok . but , it 's not , common . but , anyway , . very much . , i bring the chocolate , to tear , with you , i hope if you need , something , from us in the future , i will be at spain , to you help , phd h: ye - ye you prefer , to eat , chocolate , at the coffee break , at the ? or you prefer now , before after ? grad e: , we ' ve got ta until after di after we take the mikes off . phd a: we could do digits while other people eat . so it 's background crunching . we do n't have background chewing . phd a: no , we do n't have any data with background eating . i ' m serious . you phd a: and it you have to write down , like , while y what you 're what ch chocolate you 're eating phd a: cuz they might make different sounds , like n nuts chocolate with nuts , chocolate without nuts . professor b: actually actually careful cuz i have a strong allergy to nuts , so i have to figure out one without th phd a: maybe those ? they 're so i . this is , this is a different speech , professor b: . i may hold off . but if i was , but maybe i 'll get some later . , why do n't we ? he he 's worried about a ticket . why do n't we do a simultaneous one ? simultaneous one ? 
phd a: and you laughed at me , too , f the first time i said that . phd a: you have to , jose , if you have n't done this , you have to plug your ears while you 're t talking postdoc c: you ' ve read digits together with us , have n't you , at the same time ? grad e: we could do the same sheet for everyone . have them all read them at once . ###summary: the berkeley meeting recorder group discussed efforts to train and test the aurora group's htk-based recognition system on icsi's digits corpus. members also discussed efforts to produce forced alignments from a selection of meeting recorder data. performance in both tasks was adversely affected by the manner of recording conditions implemented and difficulties attributing utterances to the appropriate speakers. while debugging efforts resulted in improved forced alignments , dealing with mixed channel speech and speaker overlap remains a key objective for future work. the group is additionally focused on a continued ability to feed different features into the recognizer and then train the system accordingly. for comparing meeting recorder digits results , it was decided that the aurora htk-based system should be tested on data from the ti digits corpus. the script for extracting speaker id information will require modifications to obtain a more accurate estimation of the amount of data recorded per speaker. subsequent recognition experiments will look at large vocabulary speech from a far-field microphone ( as performed in switchboard evaluations ). hand-marked , word-level alignments are needed to reveal speaker boundaries and tune the parameters of the model. modifications to the transcriber tool are required for allowing transcribers to simultaneously view the signal in xwaves and see where words are located in time. digits training needs to be performed on a larger data set. a significant loss in recognition resulted from not having included the type of phone-loop adaptation found in the sri system. recognition performance was worse for digits recorded in closed microphone conditions versus those recorded in a studio ( e.g. ti-digits ). a mismatch between the manner in which data were collected and the models used for doing recognition---e.g. bandwidth parameterization and the use of near- versus far-field microphones---was identified. too little data per speaker can have a negative effect on vtl estimation. the pzm channel selected for obtaining digits data was too far away from most of the speakers. current speech alignment techniques assume that foreground speech must be continuous and , barring some isolated words and backchannels , cannot cope with overlapping background speech. performing adaptations on both the foreground and background speaker produced a new variety of misalignments , a problem resulting , in part , from the fact that background speakers often match better to foreground conditions. transcribers occasionally misidentified speakers and omitted backchannels that were more hidden in the mixed signal. good recognition performance was achieved with the lapel microphones. the recognizer performed well on time-aligned segments labelled as 'non-overlap' ( i.e. one person talking ) , while segments labelled as 'overlap' ( i.e. multiple speakers talking at the same time ) yielded poor results. future recognition efforts will include looking at reverberation. forced alignment improvements were gained by examining the types of errors generated and making the necessary adjustments.
more accurate alignments were achieved by significantly increasing the pruning value. future alignment efforts will include cloning the reject model , and adapting it to both the foreground and background speaker. members of the group will also compare meeting recorder data with other corpora ( e.g. switchboard ) to determine whether speaker overlap is a feature that is more specific to meetings versus other modes of spoken interaction. a cursory analysis of background speech revealed that backchannels frequently occurred after a question was asked. backchannels also featured a high proportion of 'yeahs' and substantially fewer 'uh-huhs'. several group members are preparing eurospeech submissions. speakers fe016 and mn017 are preparing a paper about the 'spurt' format , wherein spurts from individual channels---i.e. continuous speech regions delineated by pauses---will be extracted , merged with alignments from different channels , and time-aligned.
grad f: , maybe it 's the turning off and turning on of the mike , right ? professor b: , that 's the mike number there , mike number five , and channel four . professor b: yes , ok . so i also copied the results that we all got in the mail from ogi and we 'll go through them also . so where are we on our runs ? phd d: so . we so as i was already said , we mainly focused on four features . phd d: the plp , the plp with jrasta , the msg , and the mfcc from the baseline aurora . phd d: , and we focused for the test part on the english and the italian . we ' ve trained several neural networks on so on the ti - digits english and on the italian data and also on the broad english french and spanish databases . mmm , so there 's our result tables here , for the tandem approach , and , actually what we @ observed is that if the network is trained on the task data it works pretty . professor b: he 's facing this way . what ? ok , this would be a good section for our silence detection . phd d: , so if the network is trained on the task data tandem works pretty . and actually we have , results are similar only on , phd a: do you mean if it 's trained only on on data from just that task , that language ? phd d: just that task . but actually we did n't train network on both types of data phonetically ba phonetically balanced data and task data . professor b: but the question is how much worse is it if you have broad data ? , my assump from what i saw from the earlier results , i last week , was that , if you trained on one language and tested on another , say , that the results were relatively poor . professor b: but but the question is if you train on one language but you have a broad coverage and then test in another , does that is that improve things i c in comparison ? professor b: no , no . different lang so if you train on ti - digits and test on italian digits , you do poorly , let 's say . professor b: so i ' m just imagining . e so , you did n't train on timit and test on italian digits , say ? phd d: we no , we did four testing , actually . the first testing is with task data so , with nets trained on task data . so for italian on the italian speech @ . the second test is trained on a single language with broad database , but the same language as the t task data . but for italian we choose spanish which we assume is close to italian . the third test is by using , the three language database phd d: and the fourth test is excluding from these three languages the language that is the task language . professor b: , ok , so , that is what i wanted to know . was n't saying it very , i . phd d: , . so for ti - digits for ins example when we go from ti - digits training to timit training we lose around ten percent , . the error rate increase u of ten percent , relative . phd d: so this is not so bad . and then when we jump to the multilingual data it 's it become worse and , around , let 's say , twenty perc twenty percent further . phd a: and so , remind me , the multilingual is just the broad data . right ? it 's not the digits . so it 's the combination of two things there . it 's removing the task specific training and it 's adding other languages . phd d: so , when it 's trained on the multilingual broad data or number so , the ratio of our error rates with the baseline error rate is around one point one . professor b: yes . 
and it 's something like one point three of the i i if you compare everything to the first case at the baseline , you get something like one point one for the using the same language but a different task , and something like one point three for three languages broad . phd d: no no . same language we are at for at english at o point eight . so it improves , compared to the baseline . but le - let me . professor b: ok , fine . let 's let 's use the conventional meaning of baseline . i by baseline here i meant using the task specific data . professor b: but , because that 's what you were just doing with this ten percent . so i was just trying to understand that . so if we call a factor of w just one , just normalized to one , the word error rate that you have for using ti - digits as training and ti - digits as test , different words , i ' m , but , the same task and so on . if we call that " one " , then what you 're saying is that the word error rate for the same language but using different training data than you 're testing on , say timit and , it 's one point one . professor b: and if it 's you do go to three languages including the english , it 's something like one point three . that 's what you were just saying , . phd d: if we exclude english , there is not much difference with the data with english . professor b: that 's interesting . do you see ? because , so no , that 's important . so what it 's saying here is just that yes , there is a reduction in performance , when you do n't have the s when you do n't have um phd a: task data . professor b: wait a minute , th the phd d: hmm . professor b: no , actually it 's interesting . so it 's so when you go to a different task , there 's actually not so different . it 's when you went to these so what 's the difference between two and three ? between the one point one case and the one point four case ? i ' m confused . phd a: it 's multilingual . phd d: yeah . the only difference it 's is that it 's multilingual um professor b: cuz in both of those cases , you do n't have the same task . phd d: yeah . yeah . professor b: so is the training data for the for this one point four case does it include the training data for the one point one case ? phd d: uh . grad f: yeah , a fraction of it . phd d: a part of it , yeah . professor b: how m how much bigger is it ? phd d: um it 's two times , grad f: yeah , . phd d: actually ? yeah . um . the english data no , the multilingual databases are two times the broad english data . we just wanted to keep this , w , not too huge . so . professor b: so it 's two times , but it includes the broad english data . phd d: i . do you uh , . professor b: and the broad english data is what you got this one point one with . so that 's timit basically right ? phd d: yeah . grad f: mm - . professor b: so it 's band - limited timit . this is all eight kilohertz sampling . phd d: mm - . yeah . grad f: mm - . downs right . professor b: so you have band - limited timit , gave you almost as good as a result as using ti - digits on a ti - digits test . ok ? phd d: hmm ? professor b: um and um but , when you add in more training data but keep the neural net the same size , it performs worse on the ti - digits . ok , now all of this is this is noisy ti - digits , i assume ? both training and test ? professor b: yeah . ok . um ok . we we may just need to so it 's interesting that h going to a different task did n't seem to hurt us that much , and going to a different language um it does n't seem to matter the difference between three and four is not particularly great , so that means that whether you have the language in or not is not such a big deal . phd d: mmm . professor b: it sounds like we may need to have more of things that are similar to a target language or . you have the same number of parameters in the neural net , you have n't increased the size of the neural net , and maybe there 's just not enough complexity to it to represent the variab increased variability in the training set . that that could be . um so , what about so these are results with th that you 're describing now , that they are pretty similar for the different features or phd d: uh , let me check . uh . professor b: yeah . phd d: so . this was for the plp , professor b: yeah . phd d: um . the yeah . for the plp with jrasta the we this is quite the same tendency , with a slight increase of the error rate , if we go to timit . and then it 's it gets worse with the multilingual . um . there there is a difference actually with b between plp and jrasta is that jrasta seems to perform better with the highly mismatched condition but slightly worse for the matched condition . mmm . professor b: i have a suggestion , actually , even though it 'll delay us slightly , would you mind running into the other room and making copies of this ? cuz we 're all if we c if we could look at it , while we 're talking , it 'd be phd d: yeah , . ok . professor b: uh , i 'll sing a song or dance while you do it , too . phd a: so grad f: alright . phd a: go ahead . ah , while you 're gone i 'll ask s some of my questions . professor b: yeah . phd a: um . professor b: yeah . uh , this way and just slightly to the left , yeah . phd a: the what was was this number forty or it was roughly the same as this one , he said ? when you had the two language versus the three language ? professor b: um . that 's what he was saying . phd a: that 's where he removed english , grad f: yeah . phd a: right ? professor b: right . grad f: it sometimes , actually , depends on what features you 're using . professor b: yeah . but but i it sounds like grad f: um , but he - . professor b: i mean . that 's interesting because it seems like what it 's saying is not so much that you got hurt because you did n't have so much representation of english , because in the other case you do n't get hurt any more , at least when it seemed like it might simply be a case that you have something that is just much more diverse , phd a: mm - . professor b: but you have the same number of parameters representing it . phd a: mm - . i wonder were all three of these nets using the same output ? this multi - language labelling ? grad f: he was using sixty - four phonemes from sampa . phd a: ok , ok . grad f: yeah . phd a: so this would from this you would say , " it does n't really matter if we put finnish into the training of the neural net , if there 's gon na be , finnish in the test data . " , it 's it sounds , we have to be careful , cuz we have n't gotten a good result yet . and comparing different bad results can be tricky . but i it does suggest that it 's not so much cross language as cross type of speech . it 's it 's but we did , the other thing i was asking him , though , is that in the case you do have to be careful because of com compounded results . we got some earlier results in which you trained on one language and tested on another and you did n't have three , but you just had one language . so you trained on one type of digits and tested on another . didn - was n't there something of that ? where you , say , trained on spanish and tested on ti - digits , or the other way around ? something like that ? professor b: there was something like that , that he showed me last week . we 'll have to till we get professor b: , this may have been what i was asking before , stephane , but , was n't there something that you did , where you trained on one language and tested on another ?
no mixture but just phd d: training on a single language , you mean , and testing on the other one ? phd d: so the only task that 's similar to this is the training on two languages , and that professor b: but we ' ve done a bunch of things where we just trained on one language . right ? , you have n't done all your tests on multiple languages . phd d: , no . either thi this is test with the same language but from the broad data , or it 's test with different languages also from the broad data , excluding the so , it 's three or three and four . phd d: . no . you mean training digits on one language and using the net to recognize on the other ? professor b: see , you showed me something like that last week . you had a you had a little professor b: so , wha what 's the this this chart this table that we 're looking at is , show is all testing for ti - digits , or ? phd d: and the first four rows is - matched , then the s the second group of four rows is mismatched , and finally highly mismatched . and then the lower part is for italian and it 's the same thing . phd d: so . it 's it 's the htk results , . so it 's htk training testings with different features and what appears in the left column is the networks that are used for doing this . professor b: , what was is that i what was it that you had done last week when you showed do you remember ? wh - when you showed me the your table last week ? phd d: you mean the htk aurora baseline ? it 's the one hundred number . it 's , all these numbers are the ratio with respect to the baseline . phd d: so , seventy point two means that we reduced the error rate by thirty percent . professor b: ok , so if we take let 's see plp with on - line normalization and delta - del so that 's this thing you have circled here in the second column , and " multi - english " refers to what ? phd d: to timit . then you have mf , ms and me which are for french , spanish and english . and , actually i forgot to say that the multilingual net are trained on features without the s derivatives but with increased frame numbers . and we can see on the first line of the table that it 's slightly worse when we do n't use delta but it 's not that much . professor b: so w so , i ' m . i missed that . what 's mf , ms and me ? professor b: so , it 's broader vocabulary . then and ok so what i ' m what i saw in your smaller chart that i was thinking of was there were some numbers i saw , that included these multiple languages and it and i was seeing that it got worse . that was all it was . you had some very limited results that at that point which showed having in these other languages . it might have been just this last category , having two languages broad that were where english was removed . so that was cross language and the result was quite poor . what i we had n't seen yet was that if you added in the english , it 's still poor . now , what 's the noise condition of the training data professor b: , this is what you were explaining . the noise condition is the same it 's the same aurora noises , in all these cases for the training . so there 's not a statistical sta a strong st statistically different noise characteristic between the training and test professor b: so there 's some a an effect from having these this broader coverage now i what we should try doing with this is try testing these on u this same thing on you probably must have this lined up to do . to try the same t with the exact same training , do testing on the other languages . on on so . , i , a minute . 
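for reference , the table normalization stephane describes works out as below ; the helper names are illustrative :

```python
# Entries are word error rates expressed as a percentage of the htk
# baseline: 100 means no change, below 100 is better than the baseline.

def ratio_to_baseline(wer_system, wer_baseline):
    return 100.0 * wer_system / wer_baseline

def relative_reduction(ratio):
    return 100.0 - ratio

print(relative_reduction(70.2))  # 29.8, i.e. roughly a thirty percent cut
```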
you have this here , for the italian . that 's right . ok , so , so . phd d: , so for the italian the results are stranger so what appears is that perhaps spanish is not very close to italian because , when using the network trained only on spanish it 's the error rate is almost twice the baseline error rate . professor b: , let 's see . is there any difference in so it 's in the so you 're saying that when you train on english and test on no , you do n't have training on english testing phd d: , for for the italian part the networks are trained with noise from aurora ti - digits , phd d: and perhaps the noise are quite different from the noises in the speech that italian . professor b: do we have any test sets in any other language that have the same noise as in the aurora ? phd a: can i ask something real quick ? in in the upper part in the english , it looks like the very best number is sixty point nine ? and that 's in the third section in the upper part under plp jrasta , the middle column ? i is that a noisy condition ? so that 's matched training ? is that what that is ? phd d: it 's no , the third part , so it 's highly mismatched . so . training and test noise are different . phd a: so why do you get your best number in would n't you get your best number in the clean case ? phd a: , ok so these are not ok , alright , i see . and then so , in the in the non - mismatched clean case , your best one was under mfcc ? that sixty - one point four ? phd d: . but it 's not a clean case . it 's a noisy case but training and test noises are the same . professor b: ok ? so , this will take some looking at , thinking about . but , what is currently running , that 's , i that just filling in the holes here or ? phd d: , no we do n't plan to fill the holes but actually there is something important , is that we made a lot of assumption concerning the on - line normalization and we just noticed recently that the approach that we were using was not leading to very good results when we used the straight features to htk . so d if you look at the left of the table , the first row , with eighty - six , one hundred , and forty - three and seventy - five , these are the results we obtained for italian with straight mmm , plp features using on - line normalization . mmm . and the , mmm what 's in the table , just at the left of the plp twelve on - line normalization column , so , the numbers seventy - nine , fifty - four and forty - two are the results obtained by pratibha with his on - line normalization her on - line normalization approach . phd d: just so these are the results of ogi with on - line normalization and straight features to htk . and the previous result , eighty - six and so on , are with our features straight to htk . phd d: so what we see that is there is that the way we were doing this was not correct , but still the networks are very good . when we use the networks our number are better that pratibha results . phd d: there were diff there were different things and , the first thing is the mmm , alpha value . so , the recursion part . , i used point five percent , which was the default value in the programs here . and pratibha used five percent . so it adapts more quickly , but , i assume that this was not important because previous results from dan and show that the both values g give the same results . it was true on ti - digits but it 's not true on italian . , second thing is the initialization of the . 
actually , what we were doing is to start the recursion from the beginning of the utterance . and using initial values that are the global mean and variances measured across the whole database . and pratibha did something different is that he she initialed the values of the mean and variance by computing this on the twenty - five first frames of each utterance . mmm . there were other minor differences , the fact that she used fifteen dissities instead s instead of thirteen , and that she used c - zero instead of log energy . , but the main differences concerns the recursion . so . , i changed the code and now we have a baseline that 's similar to the ogi baseline . we it it 's slightly different because i do n't exactly initialize the same way she does . actually i start , mmm , i do n't to a fifteen twenty - five frames before computing a mean and the variance to e to start the recursion . i use the on - line scheme and only start the re recursion after the twenty - five twenty - fifth frame . but , it 's similar . so i retrained the networks with these , the networks are retaining with these new features . so what i expect is that these numbers will a little bit go down but perhaps not so much because the neural networks learn perhaps to even if the features are not normalized . it it will learn how to normalize and professor b: ok , but that given the pressure of time we probably want to draw because of that especially , we wanna draw some conclusions from this , do some reductions in what we 're looking at , and make some strong decisions for what we 're gon na do testing on before next week . so do you are you w did you have something going on , on the side , with multi - band or on this , phd d: i no , i we plan to start this so , act actually we have discussed @ , these what we could do more as a research and we were thinking perhaps that the way we use the tandem is not , there is perhaps a flaw in the because we trained the networks if we trained the networks on the on a language and a t or a specific task , what we ask is to the network is to put the bound the decision boundaries somewhere in the space . phd d: and mmm and ask the network to put one , at one side of the for a particular phoneme at one side of the boundary decision boundary and one for another phoneme at the other side . and so there is reduction of the information there that 's not correct because if we change task and if the phonemes are not in the same context in the new task , the decision boundaries are not should not be at the same place . phd d: but the way the feature gives the the way the network gives the features is that it reduce completely the it removes completely the information a lot of information from the features by placing the decision boundaries at optimal places for one data but this is not the case for another data . phd d: so what we were thinking about is perhaps one way to solve this problem is increase the number of outputs of the neural networks . doing something like , phonemes within context and , context dependent phonemes . professor b: maybe . , you could make the same argument , it 'd be just as legitimate , for hybrid systems as . professor b: , but it 's still true that what you 're doing is you 're ignoring you 're coming up with something to represent , whether it 's a distribution , probability distribution or features , you 're coming up with a set of variables that are representing , things that vary w over context . 
, and you 're putting it all together , ignoring the differences in context . that that 's true for the hybrid system , it 's true for a tandem system . so , for that reason , when you in a hybrid system , when you incorporate context one way or another , you do get better scores . but i it 's a big deal to get that . i ' m and once you the other thing is that once you represent start representing more and more context it is much more specific to a particular task in language . so , the acoustics associated with a particular context , you may have some kinds of contexts that will never occur in one language and will occur frequently in the other , so the qu the issue of getting enough training for a particular context becomes harder . we already actually do n't have a huge amount of training data phd d: , but mmm , the way we do it now is that we have a neural network and the net network is trained almost to give binary decisions . and binary decisions about phonemes . nnn it 's professor b: almost . but it does give a distribution . it 's and it is true that if there 's two phones that are very similar , that the i it may prefer one but it will give a reasonably high value to the other , too . phd d: but so it 's almost binary decisions and the idea of using more classes is to get something that 's less binary decisions . professor b: no , but it would still be even more of a binary decision . it it 'd be even more of one . because then you would say that in that this phone in this context is a one , but the same phone in a slightly different context is a zero . professor b: that would be even more distinct of a binary decision . i actually would have thought you 'd wanna go the other way and have fewer classes . professor b: , the thing i was arguing for before , but again which i do n't think we have time to try , is something in which you would modify the code so you could train to have several outputs on and use articulatory features cuz then that would go that would be much broader and cover many different situations . but if you go to very fine categories , it 's very binary . phd d: , but , perhaps you 're right , but you have more classes so you have more information in your features . so , you have more information in the phd d: posteriors vector which means that but still the information is relevant because it 's information that helps to discriminate , if it 's possible to be able to discriminate among the phonemes in context . professor b: we could disagree about it at length but the real thing is if you 're interested in it you 'll probably try it and we 'll see . but but what i ' m more concerned with now , as an operational level , is , what do we do in four or five days ? , and so we have to be concerned with are we gon na look at any combinations of things , once the nets get retrained so you have this problem out of it . , are we going to look at multi - band ? are we gon na look at combinations of things ? , what questions are we gon na ask , now that , we should probably turn shortly to this o g i note . , how are we going to combine with what they ' ve been focusing on ? , we have n't been doing any of the l d a rasta thing . and they , although they do n't talk about it in this note , there 's , the issue of the mu law business versus the logarithm , so . so what i what is going on right now ? what 's right you ' ve got nets retraining , are there is there are there any h t k trainings testings going on ? 
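a minimal sketch of the on - line normalization recursion discussed a moment ago , assuming per - utterance feature matrices ; the alpha values and the twenty - five frame initialization come from the discussion , everything else is illustrative :

```python
import numpy as np

# On-line mean/variance normalization as an exponential recursion over
# frames. alpha controls how fast the estimates adapt (0.005 vs 0.05 was
# one of the differences mentioned); initializing from the first 25
# frames follows the ogi variant. Details of either implementation may
# differ from this sketch.

def online_normalize(frames, alpha=0.05, init_frames=25):
    """frames: (num_frames, num_features) array; returns normalized copy."""
    frames = np.asarray(frames, dtype=float)
    mean = frames[:init_frames].mean(axis=0)
    var = frames[:init_frames].var(axis=0) + 1e-8
    out = np.empty_like(frames)
    for t, x in enumerate(frames):
        mean = (1 - alpha) * mean + alpha * x              # recursive mean
        var = (1 - alpha) * var + alpha * (x - mean) ** 2  # recursive variance
        out[t] = (x - mean) / np.sqrt(var)
    return out
```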
phd e: i ' m trying the htk with , plp twelve on - line delta - delta and msg filter together . professor b: old one . so it 's using all the nets for that but again we have the hope that it we have the hope that it maybe it 's not making too much difference , phd d: , i will start work on multi - band . and we plan to work also on the idea of using both features and net outputs . and we think that with this approach perhaps we could reduce the number of outputs of the neural network . , so , get simpler networks , because we still have the features . so we have come up with different broad phonetic categories . and we have three types of broad phonetic classes . , something using place of articulation which leads to nine , broad classes . , another which is based on manner , which is also something like nine classes . and then , something that combine both , and we have twenty f twenty - five ? phd d: twenty - seven broad classes . so like , , i , like back vowels , front vowels . professor b: so you have two net or three nets ? was this ? how many how many nets do you have ? no nets . phd d: , it 's just were we just changing the labels to retrain nets with fewer out outputs . professor b: . the software currently just has a allows for , the one hot output . so you 're having multiple nets and combining them , how are you coming up with if you say if you have a place characteristic and a manner characteristic , how do you professor b: so you 're going the other way of what you were saying a bit ago instead of phd d: i do n't think this will work alone . it will get worse because , i believe the effect that of too reducing too much the information is what happens phd d: because there is perhaps one important thing that the net brings , and ogi show showed that , is the distinction between sp speech and silence because these nets are trained on - controlled condition . the labels are obtained on clean speech , and we add noise after . so this is one thing but perhaps , something intermediary using also some broad classes could bring so much more information . professor b: so so again then we have these broad classes and , somewhat broad . , it 's twenty - seven instead of sixty - four , . and you have the original features . which are plp , . and then , just to remind me , all of that goes into , that all of that is transformed by , k - kl , or ? professor b: whether you would transform together or just one . might wanna try it both ways . but that 's interesting . so that 's something that you 're you have n't trained yet but are preparing to train , and so hynek will be here monday . monday or tuesday . so , we need to choose the experiments carefully , so we can get key questions answered before then and leave other ones aside even if it leaves incomplete tables someplace , it 's really time to choose . , let me pass this out , . these are did did i interrupt you ? professor b: ok , so , something i asked so they 're doing the vad i they mean voice activity detection so again , it 's the silence so they ' ve just trained up a net which has two outputs , i believe . i asked hynek whether i have n't talked to sunil i asked hynek whether they compared that to just taking the nets we already had and summing up the probabilities . to get the speech voice activity detection , or else just using the silence , if there 's only one silence output . and , he did n't think they had , . 
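a sketch of the features - plus - net - outputs idea with a kl transform over the joint vector ; the log - posterior step and the dimensionalities are assumptions , and transforming jointly versus separately is exactly the open choice raised above :

```python
import numpy as np

# Combine plp features with the net's broad-class outputs (27 classes in
# the scheme described), then decorrelate with a kl transform (pca).
# This shows the joint variant; per-stream transforms are the alternative.

def kl_transform(data, num_components):
    """Estimate a klt/pca projection from data of shape (frames, dim)."""
    centered = data - data.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1][:num_components]
    return eigvecs[:, order]

def tandem_features(plp, posteriors, num_components=24):
    """plp: (frames, n); posteriors: (frames, 27) broad-class outputs."""
    combined = np.hstack([plp, np.log(posteriors + 1e-10)])
    proj = kl_transform(combined, num_components)
    return (combined - combined.mean(axis=0)) @ proj
```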
but on the other hand , maybe they can get by with a smaller net and maybe sometimes you do n't run the other , maybe there 's a computational advantage to having a separate net , anyway . so their the results look pretty good . , not uniformly . , there 's a an example or two that you can find , where it made it slightly worse , but in all but a couple examples . phd e: but they have a question of the result . how are trained the lda filter ? how obtained the lda filter ? phd e: yes , the lda filter needs some training set to obtain the filter . maybe i exactly how they are obtained . phd e: , lda filter need a set of training to obtain the filter . and maybe for the italian , for the td te on for finnish , these filter are obtained with their own training set . professor b: yes , i . that 's that 's so that 's a very good question , then now that it i understand it . it 's , where does the lda come from ? in the in earlier experiments , they had taken lda from a completely different database , phd e: , because maybe it the same situation that the neural network training with their own set . professor b: so that 's a good question . where does it come from ? , i . , but to tell you the truth , i was n't actually looking at the lda so much when i was looking at it i was mostly thinking about the vad . and , it ap what does asp ? that 's professor b: cuz there 's " baseline aurora " above it . and it 's this is mostly better than baseline , although in some cases it 's a little worse , in a couple cases . professor b: , it says what it is . but i do n't how that 's different from professor b: this was this is the same point we were at when we were up in oregon . phd d: i think it 's the c - zero using c - zero instead of log energy . phd a: they s they say in here that the vad is not used as an additional feature . professor b: . so so what they 're doing here is , i if you look down at the block diagram , they estimate they get a they get an estimate of whether it 's speech or silence , professor b: and then they have a median filter of it . and so , they 're trying to find stretches . the median filter is enforcing a i it having some continuity . you find stretches where the combination of the frame wise vad and the median filter say that there 's a stretch of silence . and then it 's going through and just throwing the data away . phd a: so it 's i do n't understand . you mean it 's throwing out frames ? before professor b: it 's throwing out chunks of frames , . there 's the median filter is enforcing that it 's not gon na be single cases of frames , or isolated frames . so it 's throwing out frames and , what i do n't understand is how they 're doing this with h t professor b: y you it stretches again . for single frames it would be pretty hard . but if you say speech starts here , speech ends there . professor b: , so in the i in the decoding , you 're saying that we 're gon na decode from here to here . professor b: they 're treating it , like , it 's not isolated word , but connected , the phd a: in the text they say that this is a tentative block diagram of a possible configuration we could think of . so that sounds like they 're not doing that yet . professor b: . no they have numbers though , so they 're doing something like that . that they 're what by tha that is they 're trying to come up with a block diagram that 's plausible for the standard .
in other words , it 's from the point of view of reducing the number of bits you have to transmit it 's not a bad idea to detect silence anyway . phd a: . i ' m just wondering what exactly did they do up in this table if it was n't this . professor b: but it 's that 's i certainly it would be tricky about it intrans in transmitting voice , for listening to , is that these kinds of things cut speech off a lot . and so professor b: it does introduce delays but they 're claiming that it 's within the boundaries of it . and the lda introduces delays , and b what he 's suggesting this here is a parallel path so that it does n't introduce , any more delay . i it introduces two hundred milliseconds of delay but at the same time the lda down here i wh what 's the difference between tlda and slda ? professor b: , you would know that . so . the temporal lda does include the same so that he , by saying this is a b a tentative block di diagram means if you construct it this way , this delay would work in that way and then it 'd be ok . they they clearly did actually remove silent sections in order because they got these word error rate results . so that it 's to do that in this because , it 's gon na give a better word error result and therefore will help within an evaluation . whereas to whether this would actually be in a final standard , i . , as , part of the problem with evaluation right now is that the word models are pretty bad and nobody wants has approached improving them . so it 's possible that a lot of the problems with so many insertions and would go away if they were better word models to begin with . so this might just be a temporary thing . but , on the other hand , and maybe it 's a decent idea . so the question we 're gon na wanna go through next week when hynek shows up i is given that we ' ve been if you look at what we ' ve been trying , we 're looking at , by then i , combinations of features and multi - band , and we ' ve been looking at cross - language , cross task issues . and they ' ve been not so much looking at the cross task multiple language issues . but they ' ve been looking at these issues . at the on - line normalization and the voice activity detection . and i when he comes here we 're gon na have to start deciding about what do we choose from what we ' ve looked at to blend with some group of things in what they ' ve looked at and once we choose that , how do we split up the effort ? , because we still have even once we choose , we ' ve still got another month or so , there 's holidays in the way , but the evaluation data comes january thirty - first so there 's still a fair amount of time to do things together it 's just that they probably should be somewhat more coherent between the two sites in that amount of time . phd a: when they removed the silence frames , did they insert some a marker so that the recognizer knows it 's knows when it 's time to back trace ? professor b: , see they , they 're . i the specifics of how they 're doing it . they 're they 're getting around the way the recognizer works because they 're not allowed to , change the scripts for the recognizer , i believe . professor b: . , that 's what i had thought . but i do n't think they are . that 's what the way i had imagined would happen is that on the other side , you p put some low level noise . probably do n't want all zeros . most recognizers do n't like zeros but , put some epsilon in or some rand epsilon random variable in . 
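a sketch of the frame - dropping scheme as read from the block diagram discussion above ; the median window length is a guess , not ogi 's actual setting :

```python
# A frame-wise speech/silence decision is smoothed with a median filter
# so that only whole stretches of silence are thrown away, never
# isolated frames, and then the silent chunks are simply removed.

def median_smooth(speech_flags, window=11):
    """speech_flags: 0/1 per frame; returns a median-filtered copy."""
    half, out = window // 2, []
    for i in range(len(speech_flags)):
        lo, hi = max(0, i - half), min(len(speech_flags), i + half + 1)
        segment = sorted(speech_flags[lo:hi])
        out.append(segment[len(segment) // 2])
    return out

def drop_silence(frames, speech_flags, window=11):
    """Keep only frames that fall inside smoothed speech regions."""
    smoothed = median_smooth(speech_flags, window)
    return [f for f, keep in zip(frames, smoothed) if keep]
```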
professor b: maybe not a constant , because you do n't want to divide by the variance of that , but it 's
phd a: that 's right . but something that is very distinguishable from speech , so that the silence model in htk will always pick it up .
professor b: so i imagine that 's what they would do . or else maybe there is some indicator to tell it to start and stop , i do n't know . but whatever they did , they have to play within the rules of this specific evaluation . we can find out .
phd a: cuz you got ta do something . otherwise it 's just a bunch of speech stuck together .
professor b: it would do badly , and it did n't do so badly , so they did something .
professor b: so , ok , this brings me up to date a bit , and hopefully brings other people up to date a bit . i wanna look at these numbers off - line a little bit and think about it and talk with everybody outside of this meeting . it sounds like there are the usual number of little problems and bugs , but it sounds like they 're getting ironed out , and now we seem to be in a position to actually look at and compare things . so that 's pretty good . one of the things i wonder about , coming back to the first results you talked about , is how much things could be helped by more parameters , and how many more parameters we can afford to have in terms of the computational limits . because when we go to twice as much data and have the same number of parameters , particularly when it 's twice as much data and it 's quite diverse , i wonder if having twice as many parameters would help . just have a bigger hidden layer . i doubt it would help by forty percent , but i ' m just curious . how are we doing on the resources ? disk , and
professor b: we 're gon na get a replacement server that 'll be a faster server , actually .
professor b: we have the little tiny ibm machine that might someday grow up to be a big ibm machine . it 's got slots for eight processors ; ibm was donating five , and we only got two so far . we had originally hoped we were getting eight - hundred - megahertz processors ; they ended up being five - fifty . so instead of having eight processors that were eight hundred megahertz , we ended up with two that are five hundred and fifty megahertz . more are supposed to come soon , and there 's only a moderate amount of memory . so i do n't think anybody has been sufficiently excited by it to spend much time with it , but hopefully they 'll get us some more parts soon , and once we get it populated , that 'll be a machine . we will ultimately get eight processors in there , and a good amount of memory . so it 'll be a pretty fast linux machine .
grad g: and we can do things on linux on some of the machines we have going already , like swede ? it seems pretty fast . but fudge is pretty fast too .
professor b: you can check with dave johnson . the machine is just sitting there , and it does have two processors , and somebody could check out the multi - threading libraries . i think the prudent thing to do would be for somebody to do the work of getting our code running on that machine with two processors , even though there are n't five or eight yet . there 's gon na be debugging hassles , and then we 'd be set for when we did have five or eight , to have it really be useful . notice how i said " somebody " and turned my head in your direction .
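the epsilon - noise idea discussed above is only speculation in the meeting about what one could put in place of removed silence ; a minimal sketch under that assumption ( all names hypothetical ) might look like this .

```python
import numpy as np

def epsilon_fill(n_frames, n_dims, scale=1e-4, rng=None):
    """Low-level random filler for removed silence regions.

    Exact zeros are avoided: a constant would give zero variance
    (bad when dividing by the variance during normalization), and
    most recognizers dislike all-zero input, so a tiny random
    signal is generated instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(0.0, scale, size=(n_frames, n_dims))
```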
that 's one thing you do n't get in these recordings : you do n't get the visuals .
grad g: is it mostly the neural network trainings that are slowing us down , or the htk runs ?
professor b: yes , is n't that right ? you 're held up by both . if the neural net trainings were a hundred times faster , you still would n't be running through these a hundred times faster , because you 'd be stuck by the htk trainings . it sounded like they were roughly equal ? is that about right ?
grad g: because that 'll be running linux , and swede and fudge are already running linux , so i could try to get the neural network trainings or the htk running under linux , and to start with i ' m wondering which one i should pick first .
professor b: probably the neural net , cuz htk we use for this aurora , but it 's not clear yet what we 're gon na use for other trainings . is it the training that takes the time , or the decoding ? is it about equal between the two , for aurora ?
professor b: do we have the htk source ? you would think the training , anyway , would parallelize fairly trivially ; the testing i do n't think would parallelize all that well . each individual sentence is pretty tricky to parallelize , but you could split up the sentences in a test set .
phd a: they have a thing for doing that , and have had for a while , in htk : you can parallelize the training and run it on several machines , and it just keeps counts . and there 's a final thing that you run that accumulates all the counts together . i do n't know what their scripts are set up to do for the aurora , but
professor b: something that we have n't really settled on yet is , other than this aurora , what do we do for large - vocabulary training slash testing for tandem systems ? cuz we had n't really done much with tandem systems for larger tasks . we had this one collaboration with cmu and we used sphinx , and we 're also gon na be collaborating with sri and we have theirs . so the advantage of going with the neural net thing is that we 're gon na use the neural net trainings no matter what , for a lot of the things we 're doing ,
professor b: whereas exactly which gaussian - mixture - based hmm thing we use is gon na depend . so with that , maybe we should go to our digit recitation task . it 's about eleven fifty . i can start over here . great , could you give adam a call ? he 's at two nine seven . herve 's coming tomorrow ; herve will be giving a talk at eleven . has everyone signed a consent form before , on previous meetings ? you do n't have to do it again each time . microphones off
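the parallel - training pattern described above ( each machine " just keeps counts " , then a final pass accumulates them all ) is a standard split - accumulate - combine scheme . the sketch below illustrates only that pattern , not htk 's actual tools or scripts , and all names are made up .

```python
from multiprocessing import Pool

def accumulate(batch):
    """Accumulate sufficient statistics (toy occupancy counts here)
    over one batch of training sentences -- the per-machine step."""
    counts = {}
    for sentence in batch:
        for unit in sentence:              # e.g. aligned states/phones
            counts[unit] = counts.get(unit, 0) + 1
    return counts

def parallel_train(batches, n_workers=4):
    """Split the training set across workers, then run the final
    combining pass that sums all the partial counts together."""
    with Pool(n_workers) as pool:
        partials = pool.map(accumulate, batches)
    total = {}
    for part in partials:
        for unit, c in part.items():
            total[unit] = total.get(unit, 0) + c
    return total
```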
the meeting was dominated by a discussion of the first results coming in. there have been four types of test , in which the training data varies , and a variety of input features have been tried. the process and results were explained to the group , the implications of the results discussed , and plans for moving forward were made. there was also discussion of some of the work being conducted by research partners ogi , including how the two groups should best work together. the group also briefly touched upon resource issues. speaker mn007 would like to investigate increasing the context of the phonemes. speaker mn013 does not agree with mn007's assessment of the outcome , and points out the lack of data , but acknowledges that if mn007 is interested he will go ahead with it. must be careful in choosing which experiments to perform , as an important visitor is coming soon. also need to come up with a stronger plan for collaboration with ogi : must decide what from both sites can be brought together , and how the work can then be divided. someone ( implied with gestures in the meeting ) must speak to a person outside the group with regard to using a multiprocessor linux machine that is available ; debugging the process while there are just two processors bodes well for when they have 8 to multi-thread. speaker mn026 volunteers to get some training running under linux. it is agreed that he should start with the neural net training , then work on htk. incorrect assumptions were made when considering the on-line normalization for the main task : members used different values from a previous study , and whilst this was believed not to make a difference , it does , so networks are being retrained. currently working with noise conditions being the same in training and test data , but there is nothing which matches the noise on the italian test data ; in fact no other language matches the noise from the aurora data. spanish was being used to train for italian as it was assumed they were the most similar , but that may not be as close a match as thought. ogi have an interesting approach to voice activity detection for removing blocks of silence , which shows good results , but currently the word model being used is too poor to make good use of this and no one is working on improving it. speakers mn007 and fn002 have been running experiments , looking at different features under different training conditions. moving from training with task data to broad data increases error rate by 10% , and moving to multiple languages increases it a further 20-30%. plp with jrasta is better than plain plp on mismatched conditions , but slightly worse on well-matched ones. speaker fn002 is also looking at the htk training , but does not yet have results. speaker mn007 is going to start work on creating broad phonetic categories based on various features , and combine this with original features like plp ; as yet it is unclear how to combine the data , however.
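as a rough illustration of the on-line normalization at issue in the summary above : a common form is an exponential - forgetting recursion over per - utterance mean and variance , where the forgetting factor and the initialization are exactly the settings the two studies chose differently . this sketch is an assumption about the general technique , not the actual code either site used ; names and defaults are hypothetical .

```python
import numpy as np

def online_normalize(frames, alpha=0.005, init_frames=25):
    """Exponentially-weighted on-line mean/variance normalization.

    alpha is the recursion (forgetting) factor, and the running
    statistics are seeded from the first `init_frames` frames of
    the utterance -- the two parameters whose settings differed
    between the studies mentioned above.
    """
    mean = frames[:init_frames].mean(axis=0)
    var = frames[:init_frames].var(axis=0) + 1e-8
    out = np.empty_like(frames, dtype=float)
    for t, x in enumerate(frames):
        mean = (1.0 - alpha) * mean + alpha * x
        var = (1.0 - alpha) * var + alpha * (x - mean) ** 2
        out[t] = (x - mean) / np.sqrt(var)
    return out
```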
but on the other hand , maybe they can get by with a smaller net and maybe sometimes you do n't run the other , maybe there 's a computational advantage to having a separate net , anyway . so their the results look pretty good . , not uniformly . , there 's a an example or two that you can find , where it made it slightly worse , but in all but a couple examples . phd e: but they have a question of the result . how are trained the lda filter ? how obtained the lda filter ? phd e: yes , the lda filter needs some training set to obtain the filter . maybe i exactly how they are obtained . phd e: , lda filter need a set of training to obtain the filter . and maybe for the italian , for the td te on for finnish , these filter are obtained with their own training set . professor b: yes , i . that 's that 's so that 's a very good question , then now that it i understand it . it 's , bro004bdialogueact1066 281192 281318 b professor qh^bs -1 0 where does the lda come from ? in the in earlier experiments , they had taken lda from a completely different database , phd e: , because maybe it the same situation that the neural network training with their own set . professor b: so that 's a good question . where does it come from ? , i . , but to tell you the truth , i was n't actually looking at the lda so much when i was looking at it i was mostly thinking about the vad . and , it ap what does asp ? that 's professor b: cuz there 's " baseline aurora " above it . and it 's this is mostly better than baseline , although in some cases it 's a little worse , in a couple cases . professor b: , it says what it is . but i do n't how that 's different from professor b: this was this is the same point we were at when we were up in oregon . phd d: i think it 's the c - zero using c - zero instead of log energy . phd a: they s they say in here that the vad is not used as an additional feature . professor b: . so so what they 're doing here is , i if you look down at the block diagram , they estimate they get a they get an estimate of whether it 's speech or silence , professor b: and then they have a median filter of it . and so , they 're trying to find stretches . the median filter is enforcing a i it having some continuity . you find stretches where the combination of the frame wise vad and the median filter say that there 's a stretch of silence . and then it 's going through and just throwing the data away . phd a: so it 's i do n't understand . you mean it 's throwing out frames ? before professor b: it 's throwing out chunks of frames , . there 's the median filter is enforcing that it 's not gon na be single cases of frames , or isolated frames . so it 's throwing out frames and , what i do n't understand is how they 're doing this with h t professor b: y you it stretches again . for single frames it would be pretty hard . but if you say speech starts here , speech ends there . professor b: , so in the i in the decoding , you 're saying that we 're gon na decode from here to here . professor b: they 're treating it , like , it 's not isolated word , but connected , the phd a: in the text they say that this is a tentative block diagram of a possible configuration we could think of . so that sounds like they 're not doing that yet . professor b: . no they have numbers though , so they 're doing something like that . that they 're what by tha that is they 're trying to come up with a block diagram that 's plausible for the standard . 
in other words , it 's from the point of view of reducing the number of bits you have to transmit it 's not a bad idea to detect silence anyway . phd a: . i ' m just wondering what exactly did they do up in this table if it was n't this . professor b: but it 's that 's i certainly it would be tricky about it intrans in transmitting voice , for listening to , is that these kinds of things cut speech off a lot . and so professor b: it does introduce delays but they 're claiming that it 's within the boundaries of it . and the lda introduces delays , and b what he 's suggesting this here is a parallel path so that it does n't introduce , any more delay . i it introduces two hundred milliseconds of delay but at the same time the lda down here i wh what 's the difference between tlda and slda ? professor b: , you would know that . so . the temporal lda does include the same so that he , by saying this is a b a tentative block di diagram means if you construct it this way , this delay would work in that way and then it 'd be ok . they they clearly did actually remove silent sections in order because they got these word error rate results . so that it 's to do that in this because , it 's gon na give a better word error result and therefore will help within an evaluation . whereas to whether this would actually be in a final standard , i . , as , part of the problem with evaluation right now is that the word models are pretty bad and nobody wants has approached improving them . so it 's possible that a lot of the problems with so many insertions and would go away if they were better word models to begin with . so this might just be a temporary thing . but , on the other hand , and maybe it 's a decent idea . so the question we 're gon na wanna go through next week when hynek shows up i is given that we ' ve been if you look at what we ' ve been trying , we 're looking at , by then i , combinations of features and multi - band , and we ' ve been looking at cross - language , cross task issues . and they ' ve been not so much looking at the cross task multiple language issues . but they ' ve been looking at these issues . at the on - line normalization and the voice activity detection . and i when he comes here we 're gon na have to start deciding about what do we choose from what we ' ve looked at to blend with some group of things in what they ' ve looked at and once we choose that , how do we split up the effort ? , because we still have even once we choose , we ' ve still got another month or so , there 's holidays in the way , but the evaluation data comes january thirty - first so there 's still a fair amount of time to do things together it 's just that they probably should be somewhat more coherent between the two sites in that amount of time . phd a: when they removed the silence frames , did they insert some a marker so that the recognizer knows it 's knows when it 's time to back trace ? professor b: , see they , they 're . i the specifics of how they 're doing it . they 're they 're getting around the way the recognizer works because they 're not allowed to , change the scripts for the recognizer , i believe . professor b: . , that 's what i had thought . but i do n't think they are . that 's what the way i had imagined would happen is that on the other side , you p put some low level noise . probably do n't want all zeros . most recognizers do n't like zeros but , put some epsilon in or some rand epsilon random variable in . 
professor b: maybe not a constant but it does n't , do n't like to divide by the variance of that , but it 's phd a: that 's right . but something that what is something that is very distinguishable from speech . so that the silence model in htk will always pick it up . professor b: so i that 's what they would do . or else , maybe there is some indicator to tell it to start and stop , i . but whatever they did , they have to play within the rules of this specific evaluation . we c we can find out . phd a: cuz you got ta do something . otherwise , if it 's just a bunch of speech , stuck together professor b: no they 're it would do badly and it did n't so badly , so they did something . professor b: . so , ok , so this brings me up to date a bit . it hopefully brings other people up to date a bit . and , i wanna look at these numbers off - line a little bit and think about it and talk with everybody , outside of this meeting . , but no it sounds like there are the usual number of little problems and bugs and but it sounds like they 're getting ironed out . and now we 're seem to be in a position to actually , look at and compare things . so that 's pretty good . i what the one of the things i wonder about , coming back to the first results you talked about , is how much , things could be helped by more parameters . and how many more parameters we can afford to have , in terms of the computational limits . because anyway when we go to twice as much data and have the same number of parameters , particularly when it 's twice as much data and it 's quite diverse , i wonder if having twice as many parameters would help . , just have a bigger hidden layer . but i doubt it would help by forty per cent . but but just curious . how are we doing on the resources ? disk , and professor b: , we 're gon na get a replacement server that 'll be a faster server , actually . professor b: we have the little tiny ibm machine that might someday grow up to be a big ibm machine . it 's got s slots for eight , ibm was donating five , we only got two so far , processors . we had originally hoped we were getting eight hundred megahertz processors . they ended up being five fifty . so instead of having eight processors that were eight hundred megahertz , we ended up with two that are five hundred and fifty megahertz . and more are supposed to come soon and there 's only a moderate amount of dat of memory . so i do n't think anybody has been sufficiently excited by it to spend much time with it , but hopefully , they 'll get us some more parts , soon and , that 'll be once we get it populated , that 'll be a machine . we will ultimately get eight processors in there . and and a amount of memory . so it 'll be a pr pretty fast linux machine . grad g: and if we can do things on linux , some of the machines we have going already , like swede ? it seems pretty fast . but fudge is pretty fast too . professor b: , you can check with dave johnson . , it 's the machine is just sitting there . and it does have two processors , and somebody could do , check out the multi - threading libraries . and i it 's possible that the , i the prudent thing to do would be for somebody to do the work on getting our code running on that machine with two processors even though there are n't five or eight . there 's there 's gon na be debugging hassles and then we 'd be set for when we did have five or eight , to have it really be useful . but . notice how i said somebody and turned my head your direction . 
that 's one thing you do n't get in these recordings . you do n't get the visuals grad g: i is it mostly the neural network trainings that are slowing us down or the htk runs that are slowing us down ? professor b: , yes . , is n't that right ? you 're held up by both , if the if the neural net trainings were a hundred times faster you still would n't be anything running through these a hundred times faster because you 'd be stuck by the htk trainings , but if the htk they 're both it sounded like they were roughly equal ? is that about right ? grad g: because , that 'll be running linux , and sw - swede and fudge are already running linux so , i could try to get the train the neural network trainings or the htk running under linux , and to start with i ' m wondering which one i should pick first . professor b: , probably the neural net cuz it 's probably it 's they both htk we use for this aurora , it 's not clear yet what we 're gon na use for trainings , there 's the trainings is it the training that takes the time , or the decoding ? , is it about equal between the two ? for for aurora ? professor b: , i how we can i how to do we have htk source ? is that you would think that would fairly trivially the training would , anyway , th the testing i do n't think would parallelize all that . but that you could certainly do d , distributed , no , it 's the each individual sentence is pretty tricky to parallelize . but you could split up the sentences in a test set . phd a: they have a they have a thing for doing that and th they have for awhile , in h t and you can parallelize the training . and run it on several machines and it just keeps counts . and there 's something a final thing that you run and it accumulates all the counts together . i do n't what their scripts are set up to do for the aurora , but professor b: something that we have n't really settled on yet is other than this aurora , what do we do , large vocabulary training slash testing for tandem systems . cuz we had n't really done much with tandem systems for larger . cuz we had this one collaboration with cmu and we used sphinx . , we 're also gon na be collaborating with sri and we have their have theirs . so i . so the advantage of going with the neural net thing is that we 're gon na use the neural net trainings , no matter what , for a lot of the things we 're doing , professor b: whereas , w exactly which hmm gaussian - mixture - based hmm thing we use is gon na depend so with that , maybe we should go to our { nonvocalsound } digit recitation task . and , it 's about eleven fifty . canned . , i can start over here . great , could you give adam a call . tell him to he 's at two nine seven . we can @ herve 's coming tomorrow , herve will be giving a talk , talk at eleven . did , did everybody sign these consent er everybody has everyone signed a consent form before , on previous meetings ? you do n't have to do it again each time microphones off ###summary: the meeting was dominated by a discussion of the first results coming in. there have been four types of test , in which the training data varies , and a variety of input features have been tried. the process and results were explained to the group , the implications of the results discussed , and plans for moving forward were made. there was also discussion of some of the work being conducted by research partners ogi , including how the two groups should best work together. the group also briefly touched upon resource issues. 
speaker mn007 would like to investigate increasing the context of the phonemes. speaker mn013 does agree with mn007's assessment of the outcome , and points out the lack of data , but acknowledges that if mn007 is interested he will go ahead with it. must be careful in choosing which experiments to perform as an important visitor is coming soon. also need to come up with a stronger plan for collaboration with ogi. must decide what from both can be brought together , and how the work can then be divided. someone ( implied with gestures in the meeting ) must speak to a person outside the group with regard to using a multiprocessor linux machine that is available. debugging the process while there are just two processors bodes well for when they have 8 processors to multi-thread. speaker mn026 volunteers to get some training running under linux. it is agreed that he should start with the neural net training , then work on htk. incorrect assumptions were made when considering the on-line normalization for the main task. members used different values from a previous study , and whilst it was believed not to make a difference , it does , so networks are being retrained. currently working with noise conditions being the same in training and test data , but there is nothing which matches the noise on the italian test data. in fact no other language matches the noise from the aurora data. spanish was being used to train for italian as it was assumed they were the most similar , but that may not be as close a match as thought. ogi have an interesting approach to voice activity detection for removing blocks of silence that shows good results , but currently the word model being used is too poor to make good use of this and no one is working on improving it. speakers mn007 and fn002 have been running experiments looking at different features under different training conditions. moving from training with task data to broad data increases the error rate by 10% , and moving to multiple languages increases it by a further 20-30%. plp with jrasta is better than plain plp on mismatched conditions , but slightly worse on well-matched. speaker fn002 is also looking at the htk training , but does not yet have results. speaker mn007 is going to start work on creating broad phonetic categories based on various features , and combine this with original features like plp. he is as yet unsure how to combine the data , however.
professor c: we we abandoned the lapel because they were not too hot , not too cold , they were , far enough away that you got more background noise , and professor c: but they were n't so close that they got quite the , the really good no , th they did n't a minute . i ' m saying that wrong . they were not so far away that they were really good representative distant mikes , but on the other hand they were not so close that they got rid of all the interference . so it was no did n't seem to be a good point to them . on the other hand if you only had to have one mike in some ways you could argue the lapel was a good choice , precisely because it 's in the middle . professor c: there 's , some kinds of junk that you get with these things that you do n't get with the lapel , little mouth clicks and breaths and are worse with these than with the lapel , but given the choice we there seemed to be very strong opinions for , getting rid of lapels . phd f: it - it 's one less than what 's written on the back of your so you should be zero , actually . professor c: and you should do a lot of talking so we get a lot more of your pronunciations . no , they do n't have a have any indian pronunciations . phd f: so what we usually do is , we typically will have our meetings and then at the end of the meetings we 'll read the digits . everybody goes around and reads the digits on the bottom of their forms . professor c: if you say so . o k . do we have anything like an agenda ? what 's going on ? i so . one thing professor c: sunil 's here for the summer , right . , so , one thing is to talk about a kick off meeting maybe and then just , i , progress reports individually , and then , plans for where we go between now and then , . phd f: i could say a few words about , some of the , compute that 's happening around here , so that people in the group know . phd f: we so we just put in an order for about twelve new machines , to use as a compute farm . and , we ordered , sun - blade - one - hundreds , and , i ' m not exactly how long it 'll take for those to come in , but , in addition , we 're running so the plan for using these is , we 're running p - make and customs here and andreas has gotten that all , fixed up and up to speed . and he 's got a number of little utilities that make it very easy to , run things using p - make and customs . you do n't actually have to write p - make scripts and things like that . the simplest thing and send an email around or , maybe i should do an faq on the web site about it . , phd f: , there 's a command , that you can use called " run command " . run dash command , run hyphen command . and , if you say that and then some job that you want to execute , it will find the fastest currently available machine , and export your job to that machine , and run it there and it 'll duplicate your environment . so you can try this as a simple test with , the l s command . so you can say " run dash command l s " , and , it 'll actually export that ls command to some machine in the institute , and , do an ls on your current directory . so , substitute ls for whatever command you want to run , and and that 's a simple way to get started using this . and , so , soon , when we get all the new machines up , e then we 'll have lots more compute to use . now th one of the things is that , each machine that 's part of the p - make and customs network has attributes associated with it . 
, attributes like how much memory the machine has , what its speed is , what its operating system , and when you use something like " run command " , you can specify those attributes for your program . if you only want your thing to run under linux , you can give it the linux attribute , and then it will find the fastest available linux machine and run it on that . so . you can control where your jobs go , to a certain extent , all the way down to an individual machine . each machine has an attribute which is the name of itself . so you can give that as an attribute and it 'll only run on that . if there 's already a job running , on some machine that you 're trying to select , your job will get queued up , and then when that resource , that machine becomes available , your job will get exported there . so , there 's a lot of features to it and it kinda helps to balance the load of the machines right now andreas and i have been the main ones using it and we 're . the sri recognizer has all this p - make customs built into it . professor c: so as i understand , he 's using all the machines and you 're using all the machines , is the rough division of phd f: exactly . , i got started using the recognizer just recently i fired off a training job , and then i fired off a recognition job and i get this email about midnight from andreas saying , " , are you running two trainings simultaneously s my m my jobs are not getting run . " so i had to back off a little bit . but , soon as we get some more machines then we 'll have more compute available . that 's just a quick update about what we ' ve got . grad g: so , let 's say i have like , a thousand little jobs to do ? , how do i do it with " run command " ? do phd f: because , you do n't wanna saturate the network . so , , you should probably not run more than , say ten jobs yourself at any one time , just because then it would keep other people phd f: it 's not that so much as that , e with if everybody ran fifty jobs at once then it would just bring everything to a halt and , people 's jobs would get delayed , so it 's a sharing thing . so you should try to limit it to somet sometim some number around ten jobs at a time . so if you had a script that had a thousand things it needed to run , you 'd somehow need to put some logic in there if you were gon na use " run command " , to only have ten of those going at a time . and , then , when one of those finished you 'd fire off another one . professor c: i remember i forget whether it was when the rutgers or hopkins workshop , i remember one of the workshops i was at there were everybody was real excited cuz they got twenty - five machines and there was some p - make like thing that sit sent things out . so all twenty - five people were sending things to all twenty - five machines and things were a lot less efficient than if you 'd just use your own machine . phd f: but , you can also if you have that level of parallelization , and you do n't wanna have to worry about writing the logic in a perl script to take care of that , you can use , p - make phd f: and you write a make file that , your final job depends on these one thousand things , phd f: and when you run p - make , on your make file , you can give it the dash capital j and then a number , and that number represents how many , machines to use at once . and then it 'll make that it never goes above that . phd d: so it 's not systematically queued . all the jobs are running . if you launch twenty jobs , they are all running . alright . 
phd f: it depends . if you " run command " , that i mentioned before , is does n't know about other things that you might be running . phd f: and they would n't know about each other . but if you use p - make , then , it knows about all the jobs that it has to run and it can control , how many it runs simultaneously . phd f: it uses " export " underlyingly . but , if you i it 's meant to be run one job at a time ? so you could fire off a thousand of those , and it does n't know any one of those does n't know about the other ones that are running . phd f: , if you have , like , if you did n't wanna write a p - make script and you just had a , an htk training job that is gon na take , six hours to run , and somebody 's using , the machine you typically use , you can say " run command " and your htk thing and it 'll find another machine , the fastest currently available machine and run your job there . professor c: now , does it have the same behavior as p - make , which is that , if you run something on somebody 's machine and they come in and hit a key then it phd f: there are right . so some of the machines at the institute , have this attribute called " no evict " . and if you specify that , in one of your attribute lines , then it 'll go to a machine which your job wo n't be evicted from . but , the machines that do n't have that attribute , if a job gets fired up on that , which could be somebody 's desktop machine , and they were at lunch , they come back from lunch and they start typing on the console , then your machine will get evicted your job will get evicted from their machine and be restarted on another machine . automatically . so which can you to lose time , if you had a two hour job , and it got halfway through and then somebody came back to their machine and it got evicted . so . if you do n't want your job to run on a machine where it could be evicted , then you give it the minus the attribute , " no evict " , and it 'll pick a machine that it ca n't be evicted from . professor c: , what about i remember always used to be an issue , maybe it 's not anymore , that if you if something required if your machine required somebody hitting a key in order to evict things that are on it so you could work , but if you were logged into it from home ? and you were n't hitting any keys ? cuz you were , home ? phd f: i ' m not how that works . , it seems like andreas did something for that . phd f: but i whether it monitors the keyboard or actually looks at the console tty , so maybe if you echoed something to the , dev console . professor c: you probably would n't ordinarily , though . right ? you probably would n't ordinarily . professor c: you 're at home and you 're trying to log in , and it takes forever to even log you in , and you probably go , " screw this " , and . phd a: , i need a little orientation about this environment and scr s how to run some jobs here because i never d did anything so far with this x emissions so , maybe i 'll ask you after the meeting . phd f: , and also , stephane 's a really good resource for that if you ca n't find me . phd f: especially with regard to the aurora . he he knows that better than i do . professor c: , why do n't we , sunil since you 're have n't been at one of these yet , why do n't yo you tell us what 's up with you ? wh - what you ' ve been up to , hopefully . phd a: , shall i start from i how may i how , i 'll start from the post aurora submission maybe . 
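Before the Aurora discussion that follows, the "only about ten jobs at a time" logic from the compute discussion above can be made concrete. This is a minimal sketch, assuming the wrapper is invoked as `run-command` on the command line (the exact name and calling convention of the ICSI tool are not verified here), and the job list in the usage comment is hypothetical:

```python
import subprocess
import time

MAX_ACTIVE = 10  # the per-user limit suggested in the discussion above

def throttle(jobs, max_active=MAX_ACTIVE):
    """Keep at most max_active run-command jobs in flight at once."""
    active = []
    for job in jobs:
        # wait until a slot frees up before exporting the next job
        while True:
            active = [p for p in active if p.poll() is None]
            if len(active) < max_active:
                break
            time.sleep(5)
        active.append(subprocess.Popen(["run-command"] + job))
    for p in active:  # drain the final batch
        p.wait()

# hypothetical usage: a thousand little feature-extraction jobs
# throttle([["extract_feats.sh", "utt%04d" % i] for i in range(1000)])
```

As noted above, expressing the jobs as targets in a make file and running p-make with the -J flag achieves the same limit without any custom logic.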
, after the submission the what i ' ve been working on mainly was to take other s submissions and then over their system , what they submitted , because we did n't have any speech enhancement system in ours . so i tried , and u first i tried just lda . and then i found that , if i combine it with lda , it gives @ improvement over theirs . phd a: just the lda filters . plug in take the cepstral coefficients coming from their system and then plug in lda on top of that . but the lda filter that i used was different from what we submitted in the proposal . what i did was i took the lda filter 's design using clean speech , mainly because the speech is already cleaned up after the enhancement so , instead of using this , narrow band lda filter that we submitted , i got new filters . that seems to be giving , improving over their , system . slightly . but , not very significantly . and , that was , showing any improvement over final by plugging in an lda . and , so then after that i added , on - line normalization also on top of that . and that there also i n i found that i have to make some changes to their time constant that i used because th it has a mean and variance update time constant and which is not suitable for the enhanced speech , and whatever we try it on with proposal - one . but , i did n't play with that time constant a lot , t g found that i have to reduce the value , i have to increase the time constant , or reduce the value of the update value . that 's all i found so i have to . , the other thing what i tried was , , took the baseline and then ran it with the endpoint inf th information , just the aurora baseline , to see that how much the baseline itself improves by just supplying the information of the w speech and nonspeech . i found that the baseline itself improves by twenty - two percent by just giving the wuh . professor c: , can you back up a second , i missed something , i my mind wandered . ad - ad when you added the on - line normalization and , things got better again ? phd a: no , things did n't get better with the same time constant that we used . phd a: with the different time constant i found that , i did n't get an improvement over not using on - line normalization , because i found that i would have change the value of the update factor . phd a: but i did n't play it with play quite a bit to make it better than . so , it 's still not , the on - line normalization did n't give me any improvement . so , so stopped there with the , speech enhancement . the the other thing what i tried was the adding the , endpoint information to the baseline and that itself gives like twenty - two percent because the second the new phase is going to be with the endpointed speech . and just to get a feel of how much the baseline itself is going to change by adding this endpoint information , , use phd f: so people wo n't even have to worry about , doing speech - nonspeech then . phd a: that 's , that 's what the feeling is like . they 're going to give the endpoint information . professor c: everybody does that , and they wanted to see , given that you 're doing that , what are the best features that you should use . clearly they 're interact . so i that i entirely agree with it . but but it might be in some ways it might be better t to rather than giving the endpoints , to have a standard that everybody uses and then interacts with . but , . it 's it 's still someth reasonable . 
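A minimal sketch of the on-line mean and variance normalization whose time constant is being retuned above, assuming a simple exponential update; the update factor used here is purely illustrative, not the proposal-one value or the retuned one:

```python
import numpy as np

def online_normalize(frames, alpha=0.02, eps=1e-6):
    """Frame-by-frame mean/variance normalization with exponential updates.

    frames: (n_frames, n_features) array of e.g. cepstral features.
    alpha:  update factor; smaller alpha means a longer time constant.
            0.02 is an illustrative value only.
    """
    mean = np.zeros(frames.shape[1])
    var = np.ones(frames.shape[1])
    out = np.empty(frames.shape, dtype=float)
    for t, x in enumerate(frames):
        mean = (1.0 - alpha) * mean + alpha * x              # running mean
        var = (1.0 - alpha) * var + alpha * (x - mean) ** 2  # running variance
        out[t] = (x - mean) / np.sqrt(var + eps)
    return out
```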
phd f: so , are people supposed to assume that there is are are people not supposed to use any speech outside of those endpoints ? or can you then use speech outside of it for estimating background noise and things ? phd a: that i i exactly . i that is where the consensus is . like y you will you 'll be given the information about the beginning and the end of speech but the whole speech is available to you . so . professor c: so it should make the spectral subtraction style things work even better , because you do n't have the mistakes in it . phd a: so that the baseline itself , it improves by twenty - two percent . i found that in s one of the speechdat - car cases , that like , the spanish one improves by just fifty percent by just putting the endpoint . w you do n't need any further speech enhancement with fifty . so , phd a: so that is when , the qualification criteria was reduced from fifty percent to something like twenty - five percent for - matched . and they have actually changed their qualification c criteria now . i after that , went home f had a vacation fo for four weeks . phd a: ye and i came back and i started working on , some other speech enhancement algorithm . , so i from the submission what i found that people have tried spectral subtraction and wiener filtering . these are the main , approaches where people have tried , so just to fill the space with some f few more speech enhancement algorithms to see whether it improves a lot , i ' ve been working on this , signal subspace approach for speech enhancement where you take the noisy signal and then decomposing the signal s and the noise subspace and then try to estimate the clean speech from the signal plus noise subspace . and so , i ' ve been actually running some s so far i ' ve been trying it only on matlab . i have to test whether it works first or not and then i 'll p port it to c and i 'll update it with the repository once i find it giving any some positive result . so , . professor c: s so you s you so you said one thing i want to jump on for a second . so so now you 're getting tuned into the repository thing that he has here and so we 'll have a single place where the is . so maybe , just briefly , you could remind us about the related experiments . cuz you did some that you talked about last week , i ? , where you were also combining something both of you i were both combining something from the , french telecom system with the u i whether it was system one or system two , or ? phd d: it was system one . so we the main thing that we did is just to take the spectral subtraction from the france telecom , which provide us some speech samples that are , with noise removed . professor c: so i let me just stop you there . so then , one distinction is that , you were taking the actual france telecom features and then applying something to phd a: , no there is a slight different . , which are extracted at the handset because they had another back - end blind equalization professor c: but that 's what . but u , i ' m not being clear . what i meant was you had something like cepstra , right ? phd d: but i it 's the s exactly the same thing because on the heads , handset they just applied this wiener filter and then compute cepstral features , phd a: the cepstral f the difference is like there may be a slight difference in the way phd a: because they use exactly the baseline system for converting the cepstrum once you have the speech . , if we are using our own code for th that could be the only difference . 
, there is no other difference . professor c: but you got some different result . so i ' m trying to understand it . but , i th phd d: , we should , have a table with all the result because i , i do n't exactly are your results ? but , but so we did this , and another difference i is that we just applied , proposal - one system after this without , with our modification to reduce the delay of the lda filters , and phd d: there are slight modifications , but it was the full proposal - one . in your case , if you tried just putting lda , then maybe on - line normalization ? phd d: so we just tried directly to just , keep the system as it was and , when we plug the spectral subtraction it improves , signif significantly . but , what seems clear also is that we have to retune the time constants of the on - line normalization . because if we keep the value that was submitted , it does n't help . you can remove on - line normalization , or put it , it does n't change anything . , as long as you have the spectral subtraction . but , you can still find some optimum somewhere , and we where exactly but , professor c: so it sounds like you should look at some tables of results and see where i where the where they were different and what we can learn from it . phd d: lda filters . there are other things that we finally were shown to improve also like , the sixty - four hertz cut - off . phd d: w , it does n't seem to hurt on ti - digits , finally . maybe because of other changes . there are some minor changes , and , right now if we look at the results , it 's , always better than it seems always better than france telecom for mismatch and high - mismatch . and it 's still slightly worse for - matched . phd d: but this is not significant . but , the problem is that it 's not significant , but if you put this in the , mmm , spreadsheet , it 's still worse . even with very minor even if it 's only slightly worse for - matched . and significantly better for hm . but , . i do n't think it 's importa important because when they will change their metric , mainly because of , when you p you plug the , frame dropping in the baseline system , it will improve a lot hm , and mm , so , i what will happen . but , the different contribution , for the different test set will be more even . phd a: because the your improvement on hm and mm will also go down significantly in the spreadsheet so . but the - matched may still the - matched may be the one which is least affected by adding the endpoint information . phd a: so the mm and hm are going to be v hugely affected by it . but they d the everything is like , but there that 's how they reduce why they reduce the qualification to twenty - five percent or some something on . phd a: , no , i they are going ahead with the same weighting . so there 's nothing on professor c: i do n't understand that . i have n't been part of the discussion , so , it seems to me that the - matched condition is gon na be unusual , professor c: in this case . unusual . because , you do n't actually have good matches ordinarily for what any @ particular person 's car is like , or phd a: , but actually the - matched condition is not like , the one in ti - digits where , you have all the training , conditions exactly like replicated in the testing condition also . it 's like , this is not calibrated by snr . 
the - matched has also some mismatch in that which is other than the phd a: has also some slight mismatches , unlike the ti - digits where it 's like prefectly matched phd a: the - matched is defined like it 's seventy percent of the whole database is used for training and thirty percent for testing . phd d: , so it means that if the database is large enough , it 's matched . professor c: so , , unless they deliberately chose it to be different , which they did n't because they want it to be - matched , it is , so it 's saying if you phd a: because the m the main major reason for the m the main mismatch is coming from the amount of noise and the silence frames and all those present in the database actually . professor c: so it 's i it 's saying ok , so you much as you train your dictation machine for talking into your computer , you have a car , and so you drive it around a bunch and record noise conditions , and then i do n't think that 's very realistic , i , so i they 're saying that if you were a company that was selling the commercially , that you would have a bunch of people driving around in a bunch of cars , and you would have something that was roughly similar and maybe that 's the argument , but i ' m not i buy it , so . so what else is going on ? phd d: you we are playing we are also playing , trying to put other spectral subtraction mmm , in the code . it would be a very simple spectral subtraction , on the , mel energies which i already tested but without the frame dropping actually , and it 's important to have frame dropping if you use spectral subtraction . phd f: is it is spectral subtraction typically done on the after the mel , scaling or is it done on the fft bins ? does it matter , or ? phd d: i d i . , it 's both , cases can i so - some of the proposal , we 're doing this on the bin on the fft bins , others on the , mel energies . you can do both , but i can not tell you what 's which one might be better or i phd a: i if you want to reconstruct the speech , it may be a good idea to do it on fft bins . phd a: but for speech recognition , it may not . it may not be very different if you do it on mel warped or whether you do it on fft . so you 're going to do a linear weighting anyway after that . ? so , it may not be really a big different . phd d: , it gives something different , but i what are the , pros and cons of both . phd a: it i - . the other thing is like when you 're putting in a speech enhancement technique , is it like one stage speech enhancement ? because everybody seems to have a mod two stages of speech enhancement in all the proposals , which is really giving them some improvement . they just do the same thing again once more . and so , there 's something that is good about doing it , to cleaning it up once more . professor c: maybe one or the other of the things that you 're doing would benefit from the other happening first . professor c: right , so he 's doing a signal subspace thing , maybe it would work better if you 'd already done some simple spectral subtraction , or maybe vi maybe the other way around , phd a: so i ' ve been thinking about combining the wiener filtering with signal subspace , just to see all some such permutation combination to see whether it really helps or not . professor c: how is it i ' m ignorant about this , how does , since wiener filter also assumes that you 're adding together the two signals , how is that differ from signal subspace ? phd a: the signal subspace ? 
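As an aside, the simple spectral subtraction on mel energies mentioned above might look like the following sketch. The oversubtraction factor and spectral floor are illustrative assumptions, and the frame dropping said to be important alongside it is not shown; the same operation could equally be applied to FFT bins before the mel warping, per the debate above:

```python
import numpy as np

def spectral_subtract(mel_energies, noise_estimate, over=1.5, floor=0.01):
    """Basic spectral subtraction applied to mel filterbank energies.

    mel_energies:   (n_frames, n_bands) noisy filterbank energies.
    noise_estimate: (n_bands,) mean energy over non-speech frames.
    over, floor:    oversubtraction factor and spectral floor; both are
                    illustrative values, not tuned for Aurora.
    """
    cleaned = mel_energies - over * noise_estimate
    # never let a band fall below a small fraction of the noise level
    return np.maximum(cleaned, floor * noise_estimate)
```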
the signal subspace approach has actually an in - built wiener filtering in it . phd a: it is like a kl transform followed by a wiener filter . is the signal is a signal substrate . phd a: so , the different the c the advantage of combining two things is mainly coming from the signal subspace approach does n't work very if the snr is very bad . it 's it works very poorly with the poor snr conditions , and in colored noise . professor c: i see . so essentially you could do simple spectral subtraction , followed by a kl transform , followed by a professor c: wiener filter . , in general , you do n't that 's right you do n't wanna othorg orthogonalize if the things are noisy . actually . , that was something that , herve and i were talking about with , the multi - band , that if you 're converting things to from , bands , groups of bands into cepstral coef , local cepstral coefficients that it 's not that great to do it if it 's noisy . phd a: so that 's one reason maybe we could combine s some something to improve snr a little bit , first stage , and then do a something in the second stage which could take it further . phd a: , the colored noise the v the signal subspace approach has , it actually depends on inverting the matrices . so it ac the covariance matrix of the noise . so if it is not positive definite , it has a it 's it does n't behave very if it is not positive definite ak it works very with white noise because we know for that it has a positive definite . phd a: so the way they get around is like they do an inverse filtering , first of the colo colored noise and then make the noise white , and then finally when you reconstruct the speech back , you do this filtering again . professor c: i was only half kidding . if you do the s spectral subtraction , that also gets rid and then you then add a little bit l noise addition , that what j jrasta does , in a way . if you look at what jrasta doing essentially i it 's equivalent to adding a little noise , in order to get rid of the effects of noise . phd d: , . so there is this . and maybe we find some people so that , agree to maybe work with us , and they have implementation of vts techniques so it 's , vector taylor series that are used to mmm , f to model the transformation between clean cepstra and noisy cepstra . so . , if you take the standard model of channel plus noise , it 's a nonlinear , transformation in the cepstral domain . phd d: , there is a way to approximate this using , first - order or second - order taylor series it can be used for , getting rid of the noise and the channel effect . phd a: this vts has been proposed by cmu ? is it is it the cmu ? , ok . professor c: , so at any rate , you 're looking general , standing back from it , looking at ways to combine one form or another of , noise removal , with these other things we have , looks like a worthy thing to do here . phd d: but , . but for there 's required to that requires to re - check everything else , and re - optimize the other things for the on - line normalization may be the lda filter . professor c: one of the seems like one of the things to go through next week when hari 's here , cuz hari 'll have his own ideas too or i not next week , week and a half , will be go through these alternatives , what we ' ve seen so far , and come up with some game plans . so , one way would he here are some alternate visions . 
one would be , you look at a few things very quickly , you pick on something that looks like it 's promising and then everybody works really hard on the same different aspects of the same thing . another thing would be to have t to pick two pol two plausible things , and , have t two working things for a while until we figure out what 's better , and then , but , w , he 'll have some ideas on that too . phd a: the other thing is to , most of the speech enhancement techniques have reported results on small vocabulary tasks . but we going to address this wall street journal in our next stage , which is also going to be a noisy task so s very few people have reported something on using some continuous speech . so , there are some , i was looking at some literature on speech enhancement applied to large vocabulary tasks and spectral subtraction does n't seems to be the thing to do for large vocabulary tasks . and it 's always people have shown improvement with wiener filtering and maybe subspace approach over spectral subtraction everywhere . but if we have to use simple spectral subtraction , we may have to do some optimization to make it work @ . professor c: so they 're making there somebody 's generating wall street journal with additive artificially added noise ? a like what they did with ti - digits , and ? phd a: i m i guenter hirsch is in charge of that . guenter hirsch and ti . maybe roger r roger , maybe in charge of . phd a: , i . there are they have there is no i if they are converging on htk or are using some mississippi state , professor c: , so that 'll be a little task in itself . we ' ve , it 's true for the additive noise , y artificially added noise we ' ve always used small vocabulary too . but for n there 's been noisy speech this larv large vocabulary that we ' ve worked with in broadcast news . so we did the broadcast news evaluation and some of the focus conditions were noisy and professor c: but we but we did n't do spectral subtraction . we were doing our funny , right ? we were doing multi , multi - stream and . but it , we di we did helped . it , did something . now we have this , meeting data . , like the we 're recording right now , and , that we have , for the , the quote - unquote noisy data there is just noisy and reverberant actually . it 's the far field mike . and , we have , the digits that we do at the end of these things . and that 's what most o again , most of our work has been done with that , with , connected digits . but , we have recognition now with some of the continuous speech , large vocabulary continuous speech , using switchboard , switchboard recognizer , no training , from this , just plain using the switchboard . professor c: that 's that 's what we 're doing , now there are some adaptation though , professor c: that , andreas has been playing with , but we 're hop , actually , dave and i were just talking earlier today about maybe at some point not that distant future , trying some of the techniques that we ' ve talked about on , some of the large vocabulary data . , i no one had done yet done test one on the distant mike using , the sri recognizer and , professor c: cuz everybody 's scared . you 'll see a little smoke coming up from the cpu trying to do it , professor c: , . but , you 're right that 's a real good point , that , we , , what if any of these ta i that 's why they 're pushing that in the evaluation . good . anything else going on ? 
at you guys ' end , phd b: i do n't have good result , with the inc including the new parameters , i do n't have good result . are similar or a little bit worse . phd b: i tried to include another new parameter to the traditional parameter , the coe the cepstrum coefficient , that , like , the auto - correlation , the r - zero and r - one over r - zero and another estimation of the var the variance of the difference for of the spec si , spectrum of the signal and the spectrum of time after filt mel filter bank . phd b: nuh . anyway . the first you have the sp the spectrum of the signal , and you have the on the other side you have the output of the mel filter bank . you can extend the coefficient of the mel filter bank and obtain an approximation of the spectrum of the signal . i do the difference i found a difference at the variance of this different because , suppose we think that if the variance is high , maybe you have n , noise . and if the variance is small , maybe you have , speech . to to to the idea is to found another feature for discriminate between voice sound and unvoice sound . and we try to use this new feature . and i did experiment i need to change to obtain this new feature i need to change the size the window size . of the a of the analysis window size , to have more information . phd b: , sixty - two point five milliseconds . and i do i did two type of experiment to include this feature directly with the other feature and to train a neural network to select it voice - unvoice - silence phd b: and to concat this new feature . but the result are n with the neural network i have more or less the same result . phd b: it 's neve e sometime it 's worse , sometime it 's a little bit better , but not significantly . phd b: no , i work with , italian and spanish . and if i do n't y use the neural network , and use directly the feature the results are worse . but does n't help . professor c: i really wonder though . we ' ve had these discussions before , and one of the things that struck me was that , about this line of thought that was particularly interesting to me was that we whenever you condense things , in an irreversible way , you throw away some information . and , that 's mostly viewed on as a good thing , in the way we use it , because we wanna suppress things that will variability for particular , phonetic units . but , you 'll do throw something away . and so the question is , can we figure out if there 's something we ' ve thrown away that we should n't have . and . when they were looking at the difference between the filter bank and the fft that was going into the filter bank , i was thinking " , ok , so they 're picking on something they 're looking on it to figure out noise , or voice voiced property whatever . " so that 's interesting . maybe that helps to drive the thought process of coming up with the features . but for me the interesting thing was , " , but is there just something in that difference which is useful ? " so another way of doing it , maybe , would be just to take the fft , power spectrum , and feed it into a neural network , professor c: no the just the same way we 're using , the same way that we 're using the filter bank . professor c: exact way the same way we 're using the filter bank . , the filter bank is good for all the reasons that we say it 's good . but it 's different . and , maybe if it 's used in combination , it will get at something that we 're missing . 
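A rough reconstruction of the feature described above: expand the mel filter bank outputs back onto the linear frequency axis and take the per-frame variance of the difference from the signal spectrum. How the filterbank matrix is built and normalized is an assumption here; a 62.5 ms analysis window is reported in the experiments above:

```python
import numpy as np

def spectral_variance_feature(power_spec, mel_fb, eps=1e-9):
    """Variance of (log FFT spectrum minus mel reconstruction), per frame.

    power_spec: (n_frames, n_fft_bins) power spectra, e.g. from 62.5 ms
                analysis windows as in the experiments above.
    mel_fb:     (n_bands, n_fft_bins) mel filterbank matrix; its exact
                construction and normalization are assumed.
    """
    mel_energies = power_spec @ mel_fb.T          # filterbank outputs
    # expand the band energies back onto the FFT bins
    recon = (mel_energies @ mel_fb) / np.maximum(mel_fb.sum(axis=0), eps)
    diff = np.log(power_spec + eps) - np.log(recon + eps)
    return diff.var(axis=1)                        # one value per frame
```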
and maybe , using , orth , klt , or , adding probabilities , all th all the different ways that we ' ve been playing with , that we would let the essentially let the neural network determine what is it that 's useful , that we 're missing here . professor c: , that 's probably why y i it would be unlikely to work as by itself , but it might help in combination . but i have to tell you , i ca n't remember the conference , but , it 's about ten years ago , i remember going to one of the speech conferences i saw within very short distance of one another a couple different posters that showed about the wonders of some auditory inspired front - end , and a couple posters away it was somebody who compared one to , just putting in the fft and the fft did slightly better . so the i it 's true there 's lots of variability , but again we have these wonderful statistical mechanisms for quantifying that a that variability , and , doing something reasonable with it . it - it 's same , argument that 's gone both ways about , we have these data driven filters , in lda , and on the other hand , if it 's data driven it means it 's driven by things that have lots of variability , and that are necessarily not necessarily gon na be the same in training and test , so , in some ways it 's good to have data driven things , and in some ways it 's bad to have data driven things . professor c: part of what we 're discovering , is ways to combine things that are data driven than are not . , so anyway , it 's just a thought , that if we had that maybe it 's just a baseline , which would show us " , what are we really getting out of the filters " , or maybe i probably not by itself , but in combination , maybe there 's something to be gained from it , and let the but , y you ' ve only worked with us for a short time , maybe in a year or two you w you will actually come up with the right set of things to extract from this information . but , maybe the neural net and the h m ms could figure it out quicker than you . phd a: what one p one thing is like what before we started using this vad in this aurora , the th what we did was like , i most of about this , adding this additional speech - silence bit to the cepstrum and training the hmm on that . that is just a binary feature and that seems to be improving a lot on the speechdat - car where there is a lot of noise but not much on the ti - digits . so , a adding an additional feature to distin to discriminate between speech and nonspeech was helping . that 's it . phd a: , we actually added an additional binary feature to the cepstrum , just the baseline . phd a: , in the case of ti - digits it did n't actually give us anything , because there was n't any f anything to discriminate between speech , and it was very short . but italian was like very it was a huge improvement on italian . phd d: but anyway the question is even more , is within speech , can we get some features ? are we drop dropping information that can might be useful within speech , . to maybe to distinguish between voice sound and unvoiced sounds ? professor c: and it 's particularly more relevant now since we 're gon na be given the endpoints . phd a: there was a paper in icassp this icassp over the extracting some higher - order , information from the cepstral coefficients and i forgot the name . some is some harmonics i , i can pull that paper out from icassp . 
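The earlier speech/nonspeech experiment recalled above amounts to appending one binary dimension to the baseline cepstra. A trivial sketch, assuming the speech/nonspeech decisions come from some available detector:

```python
import numpy as np

def append_speech_bit(cepstra, vad_labels):
    """Append a 0/1 speech flag as an extra feature dimension.

    cepstra:    (n_frames, n_ceps) baseline cepstral features.
    vad_labels: (n_frames,) array, 1 for speech and 0 for non-speech,
                from whatever detector is available (assumed given).
    """
    bit = vad_labels.reshape(-1, 1).astype(cepstra.dtype)
    return np.hstack([cepstra, bit])
```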
phd a: it wa it was taking the , it was about finding the higher - order moments of and i ' m not about whether it is the higher - order moments , or phd a: , he was showing up some something on noisy speech , some improvement on the noisy speech . some small vocabulary tasks . so it was on plp derived cepstral coefficients . professor c: , but again you could argue that th that 's exactly what the neural network does . so n neural network , is in some sense equivalent to computing , higher - order moments of what you professor c: , it does n't do it very specifically , pretty . but . anything on your end you want to talk about ? grad g: , nothing i wanna really talk about . i can just , share a little bit sunil has n't heard about , what i ' ve been doing . so , i told you i was getting prepared to take this qualifier exam . so that 's just , trying to propose , your next your following years of your phd work , trying to find a project to define and to work on . so , i ' ve been , looking into , doing something about r , speech recognition using acoustic events . so , the idea is you have all these different events , voicing , nasality , r - coloring , burst or noise , frication , that kinda , building robust , primary detectors for these acoustic events , and using the outputs of these robust detectors to do speech recognition . and , these primary detectors , will be , inspired by , multi - band techniques , doing things , similar to larry saul 's work on , graphical models to detect these , acoustic events . and , so i been thinking about that and some of the issues that i ' ve been running into are , exactly what acoustic events i need , what acoustic events will provide a good enough coverage to in order to do the later recognition steps . and , also , once i decide a set of acoustic events , h how do i get labels ? training data for these acoustic events . and , then later on down the line , start playing with the models themselves , the primary detectors . , i kinda see like , after building the primary detectors i see , myself taking the outputs and feeding them in , sorta tandem style into a , gaussian mixtures hmm back - end , and doing recognition . so , that 's just generally what i ' ve been looking at . professor c: by , the voiced - unvoiced version of that could tie right in to what carmen was looking at . , if you if a multi - band approach was helpful as it is , it seems to be helpful for determining voiced - unvoiced , that one might be another thing . grad g: were you gon na say something ? it looked ok , never mind . , . and so , this past week , i ' ve been , looking a little bit into , traps , and doing traps on these e events too , just , seeing if that 's possible . and , other than that , i was kicked out of i - house for living there for four years . professor c: no . so you live in a cardboard box in the street now or , no ? grad g: , s som something like that . in albany , and . that 's it . professor c: suni - i d ' you v did you find a place ? is that out of the way ? phd a: not yet . , yesterday i called up a lady who ha who will have a vacant room from may thirtieth and she said she 's interviewing two more people . and she would get back to me on monday . so that 's only thing i have and diane has a few more houses . she 's going to take some pictures and send me after i go back . 
so it 's that 's professor c: they 're available , and they 'll be able to get you something , so worst comes to worst we 'll put you up in a hotel for a while phd a: so , in that case , i ' m going to be here on thirty - first definitely . grad e: , if you 're in a desperate situation and you need a place to stay , you could stay with me for a while . i ' ve got a spare bedroom right now . phd a: . ok . that is of you . so , it may be he needs more than me . grad g: r no , no . my my cardboard box is actually a spacious two bedroom apartment . professor c: dave . do y wanna say anything about you you actually been , last week you were doing this with pierre , you were mentioning . is that something worth talking about , grad e: , it 's , it i do n't think it directly relates . , so , i was helping a speech researcher named pierre divenyi and he 's int he wanted to , look at , how people respond to formant changes , . so he created a lot of synthetic audio files of vowel - to - vowel transitions , and then he wanted a psycho - acoustic , spectrum . and he wanted to look at , how the energy is moving over time in that spectrum and compare that to the listener tests . and , . so , i gave him a plp spectrum . and to he t wanted to track the peaks so he could look at how they 're moving . so i took the , plp lpc coefficients and , i found the roots . this was something that stephane suggested . i found the roots of the , lpc polynomial to , track the peaks in the , plp lpc spectra . phd a: , no . so you just instead of the log you took the root square , cubic root . what di w i did n't get that . professor c: except what they call line spectral pairs they push it towards the unit circle , do n't they , to ? but it but , . but what we 'd used to do w when i did synthesis at national semiconductor twenty years ago , the technique we were playing with initially was taking the lpc polynomial and , finding the roots . it was n't plp cuz hynek had n't invented it yet , but it was just lpc , and , we found the roots of the polynomial , and th when you do that , sometimes they 're f they 're what most people call formants , sometimes they 're not . so it 's a little , formant tracking with it can be a little tricky cuz you get these funny values in real speech , grad e: right . so , if @ every root that 's since it 's a real signal , the lpc polynomial 's gon na have real coefficients . so that means that every root that is not a real root is gon na be a c complex pair , of a complex value and its conjugate . so for each and if you look at that on the unit circle , one of these one of the members of the pair will be a positive frequency , one will be a negative frequency , . so f for the i ' m using an eighth - order polynomial and i 'll get three or four of these pairs which give me s which gives me three or four peak positions . professor c: so if it 's from synthetic speech then maybe it 'll be cleaner . for real speech in real then what you end up having is , like i say , funny little things that are do n't exactly fit your notion of formants all that . professor c: and and what in what we were doing , which was not so much looking at things , it was ok because it was just a question of quantization . , we were just , storing it was we were doing , stored speech , quantization . but but , in your case phd d: actually you have peaks that are not at the formant 's positions , but they are lower in energy phd f: if this is synthetic speech ca n't you just get the formants directly ? 
h how is the speech created ? grad e: in w we could get , formant frequencies out of the synthesizer , as . and , w one thing that the , lpc approach will hopefully give me in addition , is that i might be able to find the b the bandwidths of these humps as . , stephane suggested looking at each complex pair as a like a se second - order iir filter . but i do n't think there 's a g a really good reason not to , get the formant frequencies from the synthesizer instead . except that you do n't have the psycho - acoustic modeling in that . professor c: so the actual so you 're not getting the actual formants per se . you 're getting the again , you 're getting the , you 're getting something that is , af strongly affected by the plp model . and so it 's more psycho - acoustic . so it 's a little it 's sort of a different thing . professor c: i ordinarily , in a formant synthesizer , the bandwidths as the ban , formant centers are , that 's somewhere in the synthesizer that was put in , as what you professor c: but but , you view each complex pair as essentially a second - order section , which has , band center and band width , you 're going back today and then back in a week i , great ! , welcome .
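A small sketch of the root-finding approach just described: find the roots of the eighth-order (PLP-)LPC polynomial, keep the positive-frequency member of each complex-conjugate pair as a peak position, and, following the second-order-IIR view suggested above, read a bandwidth off each pole radius. The coefficient sign convention is assumed:

```python
import numpy as np

def lpc_peaks(lpc_coeffs, fs):
    """Peak (formant-like) frequencies and bandwidths from LPC roots.

    lpc_coeffs: [1, a1, ..., a8] polynomial coefficients in descending
                powers (8th order, as in the experiment above); the sign
                convention is assumed.
    fs:         sampling rate in Hz.
    """
    roots = np.roots(lpc_coeffs)
    # real coefficients mean complex roots come in conjugate pairs;
    # keep the positive-frequency member of each pair
    roots = roots[np.imag(roots) > 0]
    freqs = np.angle(roots) * fs / (2.0 * np.pi)   # peak centers in Hz
    # pole radius gives a bandwidth estimate for each 2nd-order section
    bws = -np.log(np.abs(roots)) * fs / np.pi
    order = np.argsort(freqs)
    return freqs[order], bws[order]
```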
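And a sketch of the signal-subspace enhancement described earlier in the meeting, a KL transform followed by Wiener-style gains, under the simplifying assumption of white noise with known variance; as discussed there, colored noise would need pre-whitening first. All details are illustrative:

```python
import numpy as np

def subspace_enhance(noisy_frames, noise_var):
    """KLT followed by per-component Wiener-style gains.

    noisy_frames: (n_frames, dim) time-domain analysis frames.
    noise_var:    scalar noise variance, assumed white and known.
    """
    cov = np.cov(noisy_frames, rowvar=False)        # noisy covariance
    eigval, eigvec = np.linalg.eigh(cov)            # KL transform basis
    sig_var = np.maximum(eigval - noise_var, 0.0)   # clean-variance estimate
    gains = sig_var / (sig_var + noise_var)         # Wiener gain per axis
    # project onto the KLT basis, weight, and reconstruct
    return ((noisy_frames @ eigvec) * gains) @ eigvec.T
```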
the icsi meeting recorder group at berkeley have a temporary new member on loan from research partner ogi. he began the meeting by reporting his recent activities , which included looking at the new baseline system. the other members of the group also reported their recent progress in areas such as spectral subtraction and voicing detection. they also explained some of their projects to their guest. the group shall soon be taking delivery of more machines for a computation farm , and they discussed some software tools for running large processes. speaker me018 will construct an faq about the new computing tools and setup , and email details. fn002 agrees to try an alternative approach to her new feature for voicing detection. speaker mn007 has taken the spectral subtraction from another group's system , and is trying it with their own , with mixed results. he is also looking into alternative methods of removing noise. fn002 is still working on voicing detection , and has run experiments with her new feature , with disappointing results. though not directly related to the group's work , speaker me006 has been putting together his proposal for a phd , and me026 has been helping another researcher with his work on formants.
###dialogue: professor c: we we abandoned the lapel because they were not too hot , not too cold , they were , far enough away that you got more background noise , and professor c: but they were n't so close that they got quite the , the really good no , th they did n't a minute . i ' m saying that wrong . they were not so far away that they were really good representative distant mikes , but on the other hand they were not so close that they got rid of all the interference . so it was no did n't seem to be a good point to them . on the other hand if you only had to have one mike in some ways you could argue the lapel was a good choice , precisely because it 's in the middle . professor c: there 's , some kinds of junk that you get with these things that you do n't get with the lapel , little mouth clicks and breaths and are worse with these than with the lapel , but given the choice we there seemed to be very strong opinions for , getting rid of lapels . phd f: it - it 's one less than what 's written on the back of your so you should be zero , actually . professor c: and you should do a lot of talking so we get a lot more of your pronunciations . no , they do n't have a have any indian pronunciations . phd f: so what we usually do is , we typically will have our meetings and then at the end of the meetings we 'll read the digits . everybody goes around and reads the digits on the bottom of their forms . professor c: if you say so . o k . do we have anything like an agenda ? what 's going on ? i so . one thing professor c: sunil 's here for the summer , right . , so , one thing is to talk about a kick off meeting maybe and then just , i , progress reports individually , and then , plans for where we go between now and then , . phd f: i could say a few words about , some of the , compute that 's happening around here , so that people in the group know . phd f: we so we just put in an order for about twelve new machines , to use as a compute farm . and , we ordered , sun - blade - one - hundreds , and , i ' m not exactly how long it 'll take for those to come in , but , in addition , we 're running so the plan for using these is , we 're running p - make and customs here and andreas has gotten that all , fixed up and up to speed . and he 's got a number of little utilities that make it very easy to , run things using p - make and customs . you do n't actually have to write p - make scripts and things like that . the simplest thing and send an email around or , maybe i should do an faq on the web site about it . , phd f: , there 's a command , that you can use called " run command " . run dash command , run hyphen command . and , if you say that and then some job that you want to execute , it will find the fastest currently available machine , and export your job to that machine , and run it there and it 'll duplicate your environment . so you can try this as a simple test with , the l s command . so you can say " run dash command l s " , and , it 'll actually export that ls command to some machine in the institute , and , do an ls on your current directory . so , substitute ls for whatever command you want to run , and and that 's a simple way to get started using this . and , so , soon , when we get all the new machines up , e then we 'll have lots more compute to use . now th one of the things is that , each machine that 's part of the p - make and customs network has attributes associated with it . 
, attributes like how much memory the machine has , what its speed is , what its operating system , and when you use something like " run command " , you can specify those attributes for your program . if you only want your thing to run under linux , you can give it the linux attribute , and then it will find the fastest available linux machine and run it on that . so . you can control where your jobs go , to a certain extent , all the way down to an individual machine . each machine has an attribute which is the name of itself . so you can give that as an attribute and it 'll only run on that . if there 's already a job running , on some machine that you 're trying to select , your job will get queued up , and then when that resource , that machine becomes available , your job will get exported there . so , there 's a lot of features to it and it kinda helps to balance the load of the machines right now andreas and i have been the main ones using it and we 're . the sri recognizer has all this p - make customs built into it . professor c: so as i understand , he 's using all the machines and you 're using all the machines , is the rough division of phd f: exactly . , i got started using the recognizer just recently i fired off a training job , and then i fired off a recognition job and i get this email about midnight from andreas saying , " , are you running two trainings simultaneously s my m my jobs are not getting run . " so i had to back off a little bit . but , soon as we get some more machines then we 'll have more compute available . that 's just a quick update about what we ' ve got . grad g: so , let 's say i have like , a thousand little jobs to do ? , how do i do it with " run command " ? do phd f: because , you do n't wanna saturate the network . so , , you should probably not run more than , say ten jobs yourself at any one time , just because then it would keep other people phd f: it 's not that so much as that , e with if everybody ran fifty jobs at once then it would just bring everything to a halt and , people 's jobs would get delayed , so it 's a sharing thing . so you should try to limit it to somet sometim some number around ten jobs at a time . so if you had a script that had a thousand things it needed to run , you 'd somehow need to put some logic in there if you were gon na use " run command " , to only have ten of those going at a time . and , then , when one of those finished you 'd fire off another one . professor c: i remember i forget whether it was when the rutgers or hopkins workshop , i remember one of the workshops i was at there were everybody was real excited cuz they got twenty - five machines and there was some p - make like thing that sit sent things out . so all twenty - five people were sending things to all twenty - five machines and things were a lot less efficient than if you 'd just use your own machine . phd f: but , you can also if you have that level of parallelization , and you do n't wanna have to worry about writing the logic in a perl script to take care of that , you can use , p - make phd f: and you write a make file that , your final job depends on these one thousand things , phd f: and when you run p - make , on your make file , you can give it the dash capital j and then a number , and that number represents how many , machines to use at once . and then it 'll make that it never goes above that . phd d: so it 's not systematically queued . all the jobs are running . if you launch twenty jobs , they are all running . alright . 
phd f: it depends . if you " run command " , that i mentioned before , is does n't know about other things that you might be running . phd f: and they would n't know about each other . but if you use p - make , then , it knows about all the jobs that it has to run and it can control , how many it runs simultaneously . phd f: it uses " export " underlyingly . but , if you i it 's meant to be run one job at a time ? so you could fire off a thousand of those , and it does n't know any one of those does n't know about the other ones that are running . phd f: , if you have , like , if you did n't wanna write a p - make script and you just had a , an htk training job that is gon na take , six hours to run , and somebody 's using , the machine you typically use , you can say " run command " and your htk thing and it 'll find another machine , the fastest currently available machine and run your job there . professor c: now , does it have the same behavior as p - make , which is that , if you run something on somebody 's machine and they come in and hit a key then it phd f: there are right . so some of the machines at the institute , have this attribute called " no evict " . and if you specify that , in one of your attribute lines , then it 'll go to a machine which your job wo n't be evicted from . but , the machines that do n't have that attribute , if a job gets fired up on that , which could be somebody 's desktop machine , and they were at lunch , they come back from lunch and they start typing on the console , then your machine will get evicted your job will get evicted from their machine and be restarted on another machine . automatically . so which can you to lose time , if you had a two hour job , and it got halfway through and then somebody came back to their machine and it got evicted . so . if you do n't want your job to run on a machine where it could be evicted , then you give it the minus the attribute , " no evict " , and it 'll pick a machine that it ca n't be evicted from . professor c: , what about i remember always used to be an issue , maybe it 's not anymore , that if you if something required if your machine required somebody hitting a key in order to evict things that are on it so you could work , but if you were logged into it from home ? and you were n't hitting any keys ? cuz you were , home ? phd f: i ' m not how that works . , it seems like andreas did something for that . phd f: but i whether it monitors the keyboard or actually looks at the console tty , so maybe if you echoed something to the , dev console . professor c: you probably would n't ordinarily , though . right ? you probably would n't ordinarily . professor c: you 're at home and you 're trying to log in , and it takes forever to even log you in , and you probably go , " screw this " , and . phd a: , i need a little orientation about this environment and scr s how to run some jobs here because i never d did anything so far with this x emissions so , maybe i 'll ask you after the meeting . phd f: , and also , stephane 's a really good resource for that if you ca n't find me . phd f: especially with regard to the aurora . he he knows that better than i do . professor c: , why do n't we , sunil since you 're have n't been at one of these yet , why do n't yo you tell us what 's up with you ? wh - what you ' ve been up to , hopefully . phd a: , shall i start from i how may i how , i 'll start from the post aurora submission maybe . 
, after the submission the what i ' ve been working on mainly was to take other s submissions and then over their system , what they submitted , because we did n't have any speech enhancement system in ours . so i tried , and u first i tried just lda . and then i found that , if i combine it with lda , it gives @ improvement over theirs . phd a: just the lda filters . plug in take the cepstral coefficients coming from their system and then plug in lda on top of that . but the lda filter that i used was different from what we submitted in the proposal . what i did was i took the lda filter 's design using clean speech , mainly because the speech is already cleaned up after the enhancement so , instead of using this , narrow band lda filter that we submitted , i got new filters . that seems to be giving , improving over their , system . slightly . but , not very significantly . and , that was , showing any improvement over final by plugging in an lda . and , so then after that i added , on - line normalization also on top of that . and that there also i n i found that i have to make some changes to their time constant that i used because th it has a mean and variance update time constant and which is not suitable for the enhanced speech , and whatever we try it on with proposal - one . but , i did n't play with that time constant a lot , t g found that i have to reduce the value , i have to increase the time constant , or reduce the value of the update value . that 's all i found so i have to . , the other thing what i tried was , , took the baseline and then ran it with the endpoint inf th information , just the aurora baseline , to see that how much the baseline itself improves by just supplying the information of the w speech and nonspeech . i found that the baseline itself improves by twenty - two percent by just giving the wuh . professor c: , can you back up a second , i missed something , i my mind wandered . ad - ad when you added the on - line normalization and , things got better again ? phd a: no , things did n't get better with the same time constant that we used . phd a: with the different time constant i found that , i did n't get an improvement over not using on - line normalization , because i found that i would have change the value of the update factor . phd a: but i did n't play it with play quite a bit to make it better than . so , it 's still not , the on - line normalization did n't give me any improvement . so , so stopped there with the , speech enhancement . the the other thing what i tried was the adding the , endpoint information to the baseline and that itself gives like twenty - two percent because the second the new phase is going to be with the endpointed speech . and just to get a feel of how much the baseline itself is going to change by adding this endpoint information , , use phd f: so people wo n't even have to worry about , doing speech - nonspeech then . phd a: that 's , that 's what the feeling is like . they 're going to give the endpoint information . professor c: everybody does that , and they wanted to see , given that you 're doing that , what are the best features that you should use . clearly they 're interact . so i that i entirely agree with it . but but it might be in some ways it might be better t to rather than giving the endpoints , to have a standard that everybody uses and then interacts with . but , . it 's it 's still someth reasonable . 
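( for reference , a minimal sketch of the on - line mean and variance normalization whose time constant is being retuned above . the first - order recursion and the update factor alpha are the generic form , not necessarily the exact proposal - one rule ; a smaller alpha means a longer time constant , which is the direction found to help on enhanced speech . )

```python
import numpy as np

def online_normalize(frames, alpha=0.01, eps=1e-6):
    """per-dimension running mean/variance normalization of feature frames.

    alpha is the update factor discussed above: lowering it lengthens
    the time constant. the initial estimates and the default alpha are
    illustrative choices, not the submitted values.
    """
    m = frames[0].copy()            # running mean estimate
    v = np.ones_like(frames[0])     # running variance estimate
    out = np.empty_like(frames)
    for t, x in enumerate(frames):
        m = (1 - alpha) * m + alpha * x
        v = (1 - alpha) * v + alpha * (x - m) ** 2
        out[t] = (x - m) / np.sqrt(v + eps)
    return out
```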
phd f: so , are people supposed to assume that there is are are people not supposed to use any speech outside of those endpoints ? or can you then use speech outside of it for estimating background noise and things ? phd a: that i i exactly . i that is where the consensus is . like y you will you 'll be given the information about the beginning and the end of speech but the whole speech is available to you . so . professor c: so it should make the spectral subtraction style things work even better , because you do n't have the mistakes in it . phd a: so that the baseline itself , it improves by twenty - two percent . i found that in s one of the speechdat - car cases , that like , the spanish one improves by just fifty percent by just putting the endpoint . w you do n't need any further speech enhancement with fifty . so , phd a: so that is when , the qualification criteria was reduced from fifty percent to something like twenty - five percent for - matched . and they have actually changed their qualification c criteria now . i after that , went home f had a vacation fo for four weeks . phd a: ye and i came back and i started working on , some other speech enhancement algorithm . , so i from the submission what i found that people have tried spectral subtraction and wiener filtering . these are the main , approaches where people have tried , so just to fill the space with some f few more speech enhancement algorithms to see whether it improves a lot , i ' ve been working on this , signal subspace approach for speech enhancement where you take the noisy signal and then decomposing the signal s and the noise subspace and then try to estimate the clean speech from the signal plus noise subspace . and so , i ' ve been actually running some s so far i ' ve been trying it only on matlab . i have to test whether it works first or not and then i 'll p port it to c and i 'll update it with the repository once i find it giving any some positive result . so , . professor c: s so you s you so you said one thing i want to jump on for a second . so so now you 're getting tuned into the repository thing that he has here and so we 'll have a single place where the is . so maybe , just briefly , you could remind us about the related experiments . cuz you did some that you talked about last week , i ? , where you were also combining something both of you i were both combining something from the , french telecom system with the u i whether it was system one or system two , or ? phd d: it was system one . so we the main thing that we did is just to take the spectral subtraction from the france telecom , which provide us some speech samples that are , with noise removed . professor c: so i let me just stop you there . so then , one distinction is that , you were taking the actual france telecom features and then applying something to phd a: , no there is a slight different . , which are extracted at the handset because they had another back - end blind equalization professor c: but that 's what . but u , i ' m not being clear . what i meant was you had something like cepstra , right ? phd d: but i it 's the s exactly the same thing because on the heads , handset they just applied this wiener filter and then compute cepstral features , phd a: the cepstral f the difference is like there may be a slight difference in the way phd a: because they use exactly the baseline system for converting the cepstrum once you have the speech . , if we are using our own code for th that could be the only difference . 
, there is no other difference . professor c: but you got some different result . so i ' m trying to understand it . but , i th phd d: , we should , have a table with all the result because i , i do n't exactly are your results ? but , but so we did this , and another difference i is that we just applied , proposal - one system after this without , with our modification to reduce the delay of the lda filters , and phd d: there are slight modifications , but it was the full proposal - one . in your case , if you tried just putting lda , then maybe on - line normalization ? phd d: so we just tried directly to just , keep the system as it was and , when we plug the spectral subtraction it improves , signif significantly . but , what seems clear also is that we have to retune the time constants of the on - line normalization . because if we keep the value that was submitted , it does n't help . you can remove on - line normalization , or put it , it does n't change anything . , as long as you have the spectral subtraction . but , you can still find some optimum somewhere , and we where exactly but , professor c: so it sounds like you should look at some tables of results and see where i where the where they were different and what we can learn from it . phd d: lda filters . there are other things that we finally were shown to improve also like , the sixty - four hertz cut - off . phd d: w , it does n't seem to hurt on ti - digits , finally . maybe because of other changes . there are some minor changes , and , right now if we look at the results , it 's , always better than it seems always better than france telecom for mismatch and high - mismatch . and it 's still slightly worse for - matched . phd d: but this is not significant . but , the problem is that it 's not significant , but if you put this in the , mmm , spreadsheet , it 's still worse . even with very minor even if it 's only slightly worse for - matched . and significantly better for hm . but , . i do n't think it 's importa important because when they will change their metric , mainly because of , when you p you plug the , frame dropping in the baseline system , it will improve a lot hm , and mm , so , i what will happen . but , the different contribution , for the different test set will be more even . phd a: because the your improvement on hm and mm will also go down significantly in the spreadsheet so . but the - matched may still the - matched may be the one which is least affected by adding the endpoint information . phd a: so the mm and hm are going to be v hugely affected by it . but they d the everything is like , but there that 's how they reduce why they reduce the qualification to twenty - five percent or some something on . phd a: , no , i they are going ahead with the same weighting . so there 's nothing on professor c: i do n't understand that . i have n't been part of the discussion , so , it seems to me that the - matched condition is gon na be unusual , professor c: in this case . unusual . because , you do n't actually have good matches ordinarily for what any @ particular person 's car is like , or phd a: , but actually the - matched condition is not like , the one in ti - digits where , you have all the training , conditions exactly like replicated in the testing condition also . it 's like , this is not calibrated by snr . 
the - matched has also some mismatch in that which is other than the phd a: has also some slight mismatches , unlike the ti - digits where it 's like prefectly matched phd a: the - matched is defined like it 's seventy percent of the whole database is used for training and thirty percent for testing . phd d: , so it means that if the database is large enough , it 's matched . professor c: so , , unless they deliberately chose it to be different , which they did n't because they want it to be - matched , it is , so it 's saying if you phd a: because the m the main major reason for the m the main mismatch is coming from the amount of noise and the silence frames and all those present in the database actually . professor c: so it 's i it 's saying ok , so you much as you train your dictation machine for talking into your computer , you have a car , and so you drive it around a bunch and record noise conditions , and then i do n't think that 's very realistic , i , so i they 're saying that if you were a company that was selling the commercially , that you would have a bunch of people driving around in a bunch of cars , and you would have something that was roughly similar and maybe that 's the argument , but i ' m not i buy it , so . so what else is going on ? phd d: you we are playing we are also playing , trying to put other spectral subtraction mmm , in the code . it would be a very simple spectral subtraction , on the , mel energies which i already tested but without the frame dropping actually , and it 's important to have frame dropping if you use spectral subtraction . phd f: is it is spectral subtraction typically done on the after the mel , scaling or is it done on the fft bins ? does it matter , or ? phd d: i d i . , it 's both , cases can i so - some of the proposal , we 're doing this on the bin on the fft bins , others on the , mel energies . you can do both , but i can not tell you what 's which one might be better or i phd a: i if you want to reconstruct the speech , it may be a good idea to do it on fft bins . phd a: but for speech recognition , it may not . it may not be very different if you do it on mel warped or whether you do it on fft . so you 're going to do a linear weighting anyway after that . ? so , it may not be really a big different . phd d: , it gives something different , but i what are the , pros and cons of both . phd a: it i - . the other thing is like when you 're putting in a speech enhancement technique , is it like one stage speech enhancement ? because everybody seems to have a mod two stages of speech enhancement in all the proposals , which is really giving them some improvement . they just do the same thing again once more . and so , there 's something that is good about doing it , to cleaning it up once more . professor c: maybe one or the other of the things that you 're doing would benefit from the other happening first . professor c: right , so he 's doing a signal subspace thing , maybe it would work better if you 'd already done some simple spectral subtraction , or maybe vi maybe the other way around , phd a: so i ' ve been thinking about combining the wiener filtering with signal subspace , just to see all some such permutation combination to see whether it really helps or not . professor c: how is it i ' m ignorant about this , how does , since wiener filter also assumes that you 're adding together the two signals , how is that differ from signal subspace ? phd a: the signal subspace ? 
the signal subspace approach has actually an in - built wiener filtering in it . phd a: it is like a kl transform followed by a wiener filter . is the signal is a signal substrate . phd a: so , the different the c the advantage of combining two things is mainly coming from the signal subspace approach does n't work very if the snr is very bad . it 's it works very poorly with the poor snr conditions , and in colored noise . professor c: i see . so essentially you could do simple spectral subtraction , followed by a kl transform , followed by a professor c: wiener filter . , in general , you do n't that 's right you do n't wanna othorg orthogonalize if the things are noisy . actually . , that was something that , herve and i were talking about with , the multi - band , that if you 're converting things to from , bands , groups of bands into cepstral coef , local cepstral coefficients that it 's not that great to do it if it 's noisy . phd a: so that 's one reason maybe we could combine s some something to improve snr a little bit , first stage , and then do a something in the second stage which could take it further . phd a: , the colored noise the v the signal subspace approach has , it actually depends on inverting the matrices . so it ac the covariance matrix of the noise . so if it is not positive definite , it has a it 's it does n't behave very if it is not positive definite ak it works very with white noise because we know for that it has a positive definite . phd a: so the way they get around is like they do an inverse filtering , first of the colo colored noise and then make the noise white , and then finally when you reconstruct the speech back , you do this filtering again . professor c: i was only half kidding . if you do the s spectral subtraction , that also gets rid and then you then add a little bit l noise addition , that what j jrasta does , in a way . if you look at what jrasta doing essentially i it 's equivalent to adding a little noise , in order to get rid of the effects of noise . phd d: , . so there is this . and maybe we find some people so that , agree to maybe work with us , and they have implementation of vts techniques so it 's , vector taylor series that are used to mmm , f to model the transformation between clean cepstra and noisy cepstra . so . , if you take the standard model of channel plus noise , it 's a nonlinear , transformation in the cepstral domain . phd d: , there is a way to approximate this using , first - order or second - order taylor series it can be used for , getting rid of the noise and the channel effect . phd a: this vts has been proposed by cmu ? is it is it the cmu ? , ok . professor c: , so at any rate , you 're looking general , standing back from it , looking at ways to combine one form or another of , noise removal , with these other things we have , looks like a worthy thing to do here . phd d: but , . but for there 's required to that requires to re - check everything else , and re - optimize the other things for the on - line normalization may be the lda filter . professor c: one of the seems like one of the things to go through next week when hari 's here , cuz hari 'll have his own ideas too or i not next week , week and a half , will be go through these alternatives , what we ' ve seen so far , and come up with some game plans . so , one way would he here are some alternate visions . 
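( an aside for the record : the chain just sketched — simple spectral subtraction , then a kl transform , then a wiener - style gain in the eigendomain — in toy numpy form . the over - subtraction factor , the spectral floor , and the scalar noise variance are placeholder choices , and real signal - subspace methods whiten colored noise first , as discussed above . )

```python
import numpy as np

def spectral_subtract(power, noise, alpha=2.0, floor=0.1):
    # first stage: over-subtract a noise estimate per band, with a floor
    return np.maximum(power - alpha * noise, floor * power)

def klt_wiener(frames, noise_var=0.1):
    """second stage: KL transform, then a Wiener gain per eigen-direction.

    frames: (T, D) feature frames after subtraction; noise_var: a scalar
    residual-noise variance (a big simplification of the real thing).
    directions whose energy is all noise get a gain of zero, i.e. the
    noise subspace is discarded.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    eigval, eigvec = np.linalg.eigh(np.cov(centered, rowvar=False))
    proj = centered @ eigvec                      # rotate into the KLT basis
    clean_var = np.maximum(eigval - noise_var, 0.0)
    gain = clean_var / (clean_var + noise_var)    # Wiener gain per direction
    return (proj * gain) @ eigvec.T + mean        # rotate back
```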
one would be , you look at a few things very quickly , you pick on something that looks like it 's promising and then everybody works really hard on the same different aspects of the same thing . another thing would be to have t to pick two pol two plausible things , and , have t two working things for a while until we figure out what 's better , and then , but , w , he 'll have some ideas on that too . phd a: the other thing is to , most of the speech enhancement techniques have reported results on small vocabulary tasks . but we going to address this wall street journal in our next stage , which is also going to be a noisy task so s very few people have reported something on using some continuous speech . so , there are some , i was looking at some literature on speech enhancement applied to large vocabulary tasks and spectral subtraction does n't seems to be the thing to do for large vocabulary tasks . and it 's always people have shown improvement with wiener filtering and maybe subspace approach over spectral subtraction everywhere . but if we have to use simple spectral subtraction , we may have to do some optimization to make it work @ . professor c: so they 're making there somebody 's generating wall street journal with additive artificially added noise ? a like what they did with ti - digits , and ? phd a: i m i guenter hirsch is in charge of that . guenter hirsch and ti . maybe roger r roger , maybe in charge of . phd a: , i . there are they have there is no i if they are converging on htk or are using some mississippi state , professor c: , so that 'll be a little task in itself . we ' ve , it 's true for the additive noise , y artificially added noise we ' ve always used small vocabulary too . but for n there 's been noisy speech this larv large vocabulary that we ' ve worked with in broadcast news . so we did the broadcast news evaluation and some of the focus conditions were noisy and professor c: but we but we did n't do spectral subtraction . we were doing our funny , right ? we were doing multi , multi - stream and . but it , we di we did helped . it , did something . now we have this , meeting data . , like the we 're recording right now , and , that we have , for the , the quote - unquote noisy data there is just noisy and reverberant actually . it 's the far field mike . and , we have , the digits that we do at the end of these things . and that 's what most o again , most of our work has been done with that , with , connected digits . but , we have recognition now with some of the continuous speech , large vocabulary continuous speech , using switchboard , switchboard recognizer , no training , from this , just plain using the switchboard . professor c: that 's that 's what we 're doing , now there are some adaptation though , professor c: that , andreas has been playing with , but we 're hop , actually , dave and i were just talking earlier today about maybe at some point not that distant future , trying some of the techniques that we ' ve talked about on , some of the large vocabulary data . , i no one had done yet done test one on the distant mike using , the sri recognizer and , professor c: cuz everybody 's scared . you 'll see a little smoke coming up from the cpu trying to do it , professor c: , . but , you 're right that 's a real good point , that , we , , what if any of these ta i that 's why they 're pushing that in the evaluation . good . anything else going on ? 
at you guys ' end , phd b: i do n't have good result , with the inc including the new parameters , i do n't have good result . are similar or a little bit worse . phd b: i tried to include another new parameter to the traditional parameter , the coe the cepstrum coefficient , that , like , the auto - correlation , the r - zero and r - one over r - zero and another estimation of the var the variance of the difference for of the spec si , spectrum of the signal and the spectrum of time after filt mel filter bank . phd b: nuh . anyway . the first you have the sp the spectrum of the signal , and you have the on the other side you have the output of the mel filter bank . you can extend the coefficient of the mel filter bank and obtain an approximation of the spectrum of the signal . i do the difference i found a difference at the variance of this different because , suppose we think that if the variance is high , maybe you have n , noise . and if the variance is small , maybe you have , speech . to to to the idea is to found another feature for discriminate between voice sound and unvoice sound . and we try to use this new feature . and i did experiment i need to change to obtain this new feature i need to change the size the window size . of the a of the analysis window size , to have more information . phd b: , sixty - two point five milliseconds . and i do i did two type of experiment to include this feature directly with the other feature and to train a neural network to select it voice - unvoice - silence phd b: and to concat this new feature . but the result are n with the neural network i have more or less the same result . phd b: it 's neve e sometime it 's worse , sometime it 's a little bit better , but not significantly . phd b: no , i work with , italian and spanish . and if i do n't y use the neural network , and use directly the feature the results are worse . but does n't help . professor c: i really wonder though . we ' ve had these discussions before , and one of the things that struck me was that , about this line of thought that was particularly interesting to me was that we whenever you condense things , in an irreversible way , you throw away some information . and , that 's mostly viewed on as a good thing , in the way we use it , because we wanna suppress things that will variability for particular , phonetic units . but , you 'll do throw something away . and so the question is , can we figure out if there 's something we ' ve thrown away that we should n't have . and . when they were looking at the difference between the filter bank and the fft that was going into the filter bank , i was thinking " , ok , so they 're picking on something they 're looking on it to figure out noise , or voice voiced property whatever . " so that 's interesting . maybe that helps to drive the thought process of coming up with the features . but for me the interesting thing was , " , but is there just something in that difference which is useful ? " so another way of doing it , maybe , would be just to take the fft , power spectrum , and feed it into a neural network , professor c: no the just the same way we 're using , the same way that we 're using the filter bank . professor c: exact way the same way we 're using the filter bank . , the filter bank is good for all the reasons that we say it 's good . but it 's different . and , maybe if it 's used in combination , it will get at something that we 're missing . 
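( a rough sketch of the variance - of - the - difference feature described above : compare the fft power spectrum against its reconstruction from the mel filter bank outputs and take the per - frame variance of the residual — high variance suggesting noise , low variance speech . reconstructing through the row - normalized transpose of the filter bank and working in the log domain are my assumptions , not necessarily what was run in the experiments . )

```python
import numpy as np

def variance_of_difference(power_spec, mel_fbank, eps=1e-10):
    """power_spec: (T, F) fft power spectra; mel_fbank: (M, F) filters.

    returns one feature per frame: the variance of the difference
    between the spectrum and its expansion from the mel energies.
    the transpose-based expansion is one simple reconstruction choice.
    """
    mel_energies = power_spec @ mel_fbank.T                  # (T, M)
    norm_fbank = mel_fbank / (mel_fbank.sum(axis=1, keepdims=True) + eps)
    recon = mel_energies @ norm_fbank                        # back to (T, F)
    diff = np.log(power_spec + eps) - np.log(recon + eps)
    return diff.var(axis=1)
```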
and maybe , using , orth , klt , or , adding probabilities , all th all the different ways that we ' ve been playing with , that we would let the essentially let the neural network determine what is it that 's useful , that we 're missing here . professor c: , that 's probably why y i it would be unlikely to work as by itself , but it might help in combination . but i have to tell you , i ca n't remember the conference , but , it 's about ten years ago , i remember going to one of the speech conferences i saw within very short distance of one another a couple different posters that showed about the wonders of some auditory inspired front - end , and a couple posters away it was somebody who compared one to , just putting in the fft and the fft did slightly better . so the i it 's true there 's lots of variability , but again we have these wonderful statistical mechanisms for quantifying that a that variability , and , doing something reasonable with it . it - it 's same , argument that 's gone both ways about , we have these data driven filters , in lda , and on the other hand , if it 's data driven it means it 's driven by things that have lots of variability , and that are necessarily not necessarily gon na be the same in training and test , so , in some ways it 's good to have data driven things , and in some ways it 's bad to have data driven things . professor c: part of what we 're discovering , is ways to combine things that are data driven than are not . , so anyway , it 's just a thought , that if we had that maybe it 's just a baseline , which would show us " , what are we really getting out of the filters " , or maybe i probably not by itself , but in combination , maybe there 's something to be gained from it , and let the but , y you ' ve only worked with us for a short time , maybe in a year or two you w you will actually come up with the right set of things to extract from this information . but , maybe the neural net and the h m ms could figure it out quicker than you . phd a: what one p one thing is like what before we started using this vad in this aurora , the th what we did was like , i most of about this , adding this additional speech - silence bit to the cepstrum and training the hmm on that . that is just a binary feature and that seems to be improving a lot on the speechdat - car where there is a lot of noise but not much on the ti - digits . so , a adding an additional feature to distin to discriminate between speech and nonspeech was helping . that 's it . phd a: , we actually added an additional binary feature to the cepstrum , just the baseline . phd a: , in the case of ti - digits it did n't actually give us anything , because there was n't any f anything to discriminate between speech , and it was very short . but italian was like very it was a huge improvement on italian . phd d: but anyway the question is even more , is within speech , can we get some features ? are we drop dropping information that can might be useful within speech , . to maybe to distinguish between voice sound and unvoiced sounds ? professor c: and it 's particularly more relevant now since we 're gon na be given the endpoints . phd a: there was a paper in icassp this icassp over the extracting some higher - order , information from the cepstral coefficients and i forgot the name . some is some harmonics i , i can pull that paper out from icassp . 
phd a: it wa it was taking the , it was about finding the higher - order moments of and i ' m not about whether it is the higher - order moments , or phd a: , he was showing up some something on noisy speech , some improvement on the noisy speech . some small vocabulary tasks . so it was on plp derived cepstral coefficients . professor c: , but again you could argue that th that 's exactly what the neural network does . so n neural network , is in some sense equivalent to computing , higher - order moments of what you professor c: , it does n't do it very specifically , pretty . but . anything on your end you want to talk about ? grad g: , nothing i wanna really talk about . i can just , share a little bit sunil has n't heard about , what i ' ve been doing . so , i told you i was getting prepared to take this qualifier exam . so that 's just , trying to propose , your next your following years of your phd work , trying to find a project to define and to work on . so , i ' ve been , looking into , doing something about r , speech recognition using acoustic events . so , the idea is you have all these different events , voicing , nasality , r - coloring , burst or noise , frication , that kinda , building robust , primary detectors for these acoustic events , and using the outputs of these robust detectors to do speech recognition . and , these primary detectors , will be , inspired by , multi - band techniques , doing things , similar to larry saul 's work on , graphical models to detect these , acoustic events . and , so i been thinking about that and some of the issues that i ' ve been running into are , exactly what acoustic events i need , what acoustic events will provide a good enough coverage to in order to do the later recognition steps . and , also , once i decide a set of acoustic events , h how do i get labels ? training data for these acoustic events . and , then later on down the line , start playing with the models themselves , the primary detectors . , i kinda see like , after building the primary detectors i see , myself taking the outputs and feeding them in , sorta tandem style into a , gaussian mixtures hmm back - end , and doing recognition . so , that 's just generally what i ' ve been looking at . professor c: by , the voiced - unvoiced version of that could tie right in to what carmen was looking at . , if you if a multi - band approach was helpful as it is , it seems to be helpful for determining voiced - unvoiced , that one might be another thing . grad g: were you gon na say something ? it looked ok , never mind . , . and so , this past week , i ' ve been , looking a little bit into , traps , and doing traps on these e events too , just , seeing if that 's possible . and , other than that , i was kicked out of i - house for living there for four years . professor c: no . so you live in a cardboard box in the street now or , no ? grad g: , s som something like that . in albany , and . that 's it . professor c: suni - i d ' you v did you find a place ? is that out of the way ? phd a: not yet . , yesterday i called up a lady who ha who will have a vacant room from may thirtieth and she said she 's interviewing two more people . and she would get back to me on monday . so that 's only thing i have and diane has a few more houses . she 's going to take some pictures and send me after i go back . 
so it 's that 's professor c: they 're available , and they 'll be able to get you something , so worst comes to worst we 'll put you up in a hotel for a while phd a: so , in that case , i ' m going to be here on thirty - first definitely . grad e: , if you 're in a desperate situation and you need a place to stay , you could stay with me for a while . i ' ve got a spare bedroom right now . phd a: . ok . that is of you . so , it may be he needs more than me . grad g: r no , no . my my cardboard box is actually a spacious two bedroom apartment . professor c: dave . do y wanna say anything about you you actually been , last week you were doing this with pierre , you were mentioning . is that something worth talking about , grad e: , it 's , it i do n't think it directly relates . , so , i was helping a speech researcher named pierre divenyi and he 's int he wanted to , look at , how people respond to formant changes , . so he created a lot of synthetic audio files of vowel - to - vowel transitions , and then he wanted a psycho - acoustic , spectrum . and he wanted to look at , how the energy is moving over time in that spectrum and compare that to the listener tests . and , . so , i gave him a plp spectrum . and to he t wanted to track the peaks so he could look at how they 're moving . so i took the , plp lpc coefficients and , i found the roots . this was something that stephane suggested . i found the roots of the , lpc polynomial to , track the peaks in the , plp lpc spectra . phd a: , no . so you just instead of the log you took the root square , cubic root . what di w i did n't get that . professor c: except what they call line spectral pairs they push it towards the unit circle , do n't they , to ? but it but , . but what we 'd used to do w when i did synthesis at national semiconductor twenty years ago , the technique we were playing with initially was taking the lpc polynomial and , finding the roots . it was n't plp cuz hynek had n't invented it yet , but it was just lpc , and , we found the roots of the polynomial , and th when you do that , sometimes they 're f they 're what most people call formants , sometimes they 're not . so it 's a little , formant tracking with it can be a little tricky cuz you get these funny values in real speech , grad e: right . so , if @ every root that 's since it 's a real signal , the lpc polynomial 's gon na have real coefficients . so that means that every root that is not a real root is gon na be a c complex pair , of a complex value and its conjugate . so for each and if you look at that on the unit circle , one of these one of the members of the pair will be a positive frequency , one will be a negative frequency , . so f for the i ' m using an eighth - order polynomial and i 'll get three or four of these pairs which give me s which gives me three or four peak positions . professor c: so if it 's from synthetic speech then maybe it 'll be cleaner . for real speech in real then what you end up having is , like i say , funny little things that are do n't exactly fit your notion of formants all that . professor c: and and what in what we were doing , which was not so much looking at things , it was ok because it was just a question of quantization . , we were just , storing it was we were doing , stored speech , quantization . but but , in your case phd d: actually you have peaks that are not at the formant 's positions , but they are lower in energy phd f: if this is synthetic speech ca n't you just get the formants directly ? 
h how is the speech created ? grad e: in w we could get , formant frequencies out of the synthesizer , as . and , w one thing that the , lpc approach will hopefully give me in addition , is that i might be able to find the b the bandwidths of these humps as . , stephane suggested looking at each complex pair as a like a se second - order iir filter . but i do n't think there 's a g a really good reason not to , get the formant frequencies from the synthesizer instead . except that you do n't have the psycho - acoustic modeling in that . professor c: so the actual so you 're not getting the actual formants per se . you 're getting the again , you 're getting the , you 're getting something that is , af strongly affected by the plp model . and so it 's more psycho - acoustic . so it 's a little it 's sort of a different thing . professor c: i ordinarily , in a formant synthesizer , the bandwidths as the ban , formant centers are , that 's somewhere in the synthesizer that was put in , as what you professor c: but but , you view each complex pair as essentially a second - order section , which has , band center and band width , you 're going back today and then back in a week i , great ! , welcome . ###summary: the icsi meeting recorder group at berkeley have a temporary new member on loan from research partner ogi. he began the meeting by reporting his recent activities , which included looking at the new baseline system. the other members of the group also reported their recent progress in areas such as spectral subtraction and voicing detection. they also explained some of their projects to their guest. the group shall soon be taking delivery of more machines for a computation farm , and they discussed some software tools for running large processes. speaker me018 will construct an faq about the new computing tools and setup , and email details. fn002 agrees to try an alternative approach to her new feature for voicing detection. speaker mn007 has taken the spectral subtraction from another group's system , and is trying it with their own , with mixed results. he is also looking into alternative methods of removing noise. fn002 is still working on voicing detection , and has run experiments with her new feature with disappointing results. though not directly related to the group's work , speaker me006 has been putting together his proposal for a phd , and me026 has been helping another researcher with his work on formants.
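( a minimal sketch of the root - finding described in the dialogue above : factor the plp - lpc polynomial , keep one member of each complex - conjugate pair , and read a center frequency off the root 's angle and a bandwidth off its magnitude — the usual second - order - section conversions . the eighth order matches the discussion ; the sampling rate is an assumed value . )

```python
import numpy as np

def lpc_peaks(lpc_coeffs, fs=8000.0):
    """lpc_coeffs: [1, a1, ..., a8] from an eighth-order (PLP-)LPC fit.

    returns (frequency_hz, bandwidth_hz) pairs, sorted by frequency.
    stable roots lie inside the unit circle, so the bandwidths come out
    positive; as noted above, on real speech some of these peaks will
    not correspond to formants.
    """
    roots = np.roots(lpc_coeffs)
    roots = roots[np.imag(roots) > 0]           # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)  # angle -> center frequency
    bws = -np.log(np.abs(roots)) * fs / np.pi   # distance to circle -> bandwidth
    order = np.argsort(freqs)
    return list(zip(freqs[order], bws[order]))
```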
###dialogue:
grad a: do we have to read them that slowly ? ok . sounded like a robot . , this is t grad a: three three six zero . four two zero one seven . that 's what of when of beat poetry . grad a: and he talks like that . that 's why i thi that probably is why of it that way . grad a: mike meyers is the guy . it - it 's his cute romantic comedy . that 's that 's that 's his cute romantic comedy , . the other thing that 's real funny , i 'll spoil it for you . is when he 's he works in a coffee shop , in san francisco , and he 's sitting there on this couch and they bring him this massive cup of espresso , and he 's like " excuse me i ordered the large espresso ? " grad a: do are y so you 're trying to decide who 's the best taster of tiramisu ? grad d: no ? . there was a fierce argument that broke out over whose tiramisu might be the best and so we decided to have a contest where those people who claim to make good tiramisu make them , and then we got a panel of impartial judges that will taste do a blind taste and then vote . should be fun . grad a: seems like you could put a s magic special ingredient in , so that everyone know which one was yours . then , if you were to bribe them , you could grad d: , i was thinking if y you guys have plans for sunday ? we 're we 're not it 's probably going to be this sunday , but we 're working with the weather here because we also want to combine it with some barbecue activity where we just fire it up and what whoever brings whatever , can throw it on there . so only the tiramisu is free , nothing else . grad a: , i ' m going back to visit my parents this weekend , so , i 'll be out of town . grad a: we are . is nancy s gon na show up ? mmm . wonder if these things ever emit a very , like , piercing screech right in your ear ? grad d: they are gon na get more comfortable headsets . they already ordered them . let 's get started . the should i go first , with the , data . can i have the remote control . so . on friday we had our wizard test data test and these are some of the results . this was the introduction . i actually , even though liz was kind enough to offer to be the first subject , i felt that she knew too much , so i asked litonya . just on the spur of the moment , and she was kind enough to serve as the first subject . grad d: so , this is what she saw as part of as for instr introduction , this is what she had to read aloud . , that was really difficult for her and grad d: the names and this was the first three tasks she had to master after she called the system , and then the system broke down , and those were the l i should say the system was supposed to break down and then these were the remaining three tasks that she was going to solve , with a human there are here are the results . mmm . and i will not we will skip the reading now . d . and . the reading was five minutes , exactly . and now comes the this is the phone - in phase of grad c: , can i have a question . so there 's no system , right ? like , there was a wizard for both parts , is this right ? grad d: it was bo it both times the same person . one time , pretending to be a system , one time , to pretending to be a human , which is actually not pretending . grad c: . is n't this obvious when it says " ok now you 're talking to a human " and then the human has the same voice ? grad d: no no . we u . ok , good question , but you just and see . it 's you 're gon na l learn . 
and the wizard sometimes will not be audible , because she was actually they there was some lapse in the wireless , we have to move her closer . grad a: is she mispronouncing " anlage " ? is it " anlaga " or " anlunga " grad d: they 're mispronouncing everything , but it 's this is the system breaking down , actually . did i call europe ? so , this is it . , if we grad d: there was a strange reflex . i have a headache . i ' m really out of it . ok , the lessons learned . the reading needs to be shorter . five minutes is just too long . , that was already anticipated by some people suggested that if we just have bullets here , they 're gon na not they 're subjects are probably not gon na going to follow the order . and she did not . professor b: s so if you just number them " one " , " two " , " three " it 's grad d: . we need to so that 's one thing . and we need a better introduction for the wizard . that is something that fey actually thought of a in the last second that sh the system should introduce itself , when it 's called . grad d: and , another suggestion , by liz , was that we , through subjects , switch the tasks . so when they have task - one with the computer , the next person should have task - one with a human , and . so we get data for that . , we have to refine the tasks more and more , which we have n't done , so far , in order to avoid this rephrasing , so where , even though w we do n't tell the person " ask blah - blah " they still try , or at least litonya tried to repeat as much of that text as possible . grad d: and my suggestion is we keep the wizard , because she did a wonderful job , grad d: in the sense that she responded quite nicely to things that were not asked for , how much is a t a bus ticket and a transfer so this is gon na happen all the time , we d you can never be . johno pointed out that we have maybe a grammatical gender problem there with wizard . grad a: i was n't whether wizard was the correct term for " not a man " . grad d: and so , some work needs to be done , but we can and this , and in case no you had n't seen it , this is what litonya looked at during the while taking the while partaking in the data collection . professor b: ok , great . so first of all , i agree that we should hire fey , and start paying her . probably pay for the time she 's put in as . , do exactly how to do that , or is lila , what exactly do we do to put her on the payroll in some way ? professor b: so why do n't you ask lila and see what she says about exactly what we do for someone in th professor b: she just graduated but anyway . so i if , i agree , she sounded fine , she a actually was , more , present and than she was in conversation , so she did a better job than i would have guessed from just talking to her . so that 's great . grad d: this is what i gave her , so this is h how to get to the student prison , and i did n't even spell it out here and in some cases i spelled it out a little bit more thoroughly , this is the information on the low sunken castle , and the amphitheater that never came up , and , so i if we give her even more , instruments to work with the results are gon na be even better . professor b: , and then as she does it she 'll learn @ . and also if she 's willing to take on the job of organizing all those subjects and that would be wonderful . and , she 's actually she 's going to graduate school in a an experimental paradigm , so this is all just fine in terms of h her learning things she 's gon na need to know , to do her career . 
so , i my is she 'll be r quite happy to take on that job . and , so grad d: she did n't explicitly state that so . and i told her that we gon na figure out a meeting time in the near future to refine the tasks and s look for the potential sources to find people . she also agrees that if it 's all just gon na be students the data is gon na be less valuable because of that professor b: , as i say there is this s set of people next door , it 's not hard to grad d: we 're already however , we may run into a problem with a reading task there . and , we 'll see . professor b: we could talk to the people who run it and see if they have a way that they could easily tell people that there 's a task , pays ten bucks , but you have to be comfortable reading relatively complicated . and and there 'll probably be self - selection to some extent . , so that 's good . now , i signed us up for the wednesday slot , and part of what we should do is this . so , my idea on that was , partly we 'll talk about system for the computer scientists , but partly i did want it to get the linguists involved in some of this issue about what the task is and all , what the dialogue is , and what 's going on linguistically , because to the extent that we can get them contributing , that will be good . so this issue about re - formulating things , maybe we can get some of the linguists sufficiently interested that they 'll help us with it , other linguists , if you 're a linguist , but in any case , the linguistics students and . so my idea on wednesday is partly to you , what you did today would i is just fine . you just do " this is what we did , and here 's the thing , and here 's some of the dialogue and . " but then , the other thing is we should give the computer scientists some idea of what 's going on with the system design , and where we think the belief - nets fit in and where the pieces are and like that . is is this make sense to everybody ? so , i do n't think it 's worth a lot of work , particularly on your part , to make a big presentation . i do n't think you should you do n't have to make any new powerpoint or anything . we got plenty of to talk about . and , then just see how a discussion goes . grad d: sounds good . the other two things is we ' ve can have johno tell us a little about this and we also have a l little bit on the interface , m - three - l enhancement , and then that was it , . grad a: so , what i did for this is , a pedagogical belief - net because i was i took i tried to conceptually do what you were talking about with the nodes that you could expand out so what i did was i took i made these dummy nodes called trajector - in and trajector - out that would isolate the things related to the trajector . and then there were the things with the source and the path and the goal . and i separated them out . and then i did similar things for our net to with the context and the discourse and whatnot , so we could isolate them or whatever in terms of the top layer . and then the bottom layer is just the mode . so . professor b: so , let 's , i do n't understand it . let 's go slide all the way up so we see what the p very bottom looks like , or is that it ? grad a: , there 's just one more node and it says " mode " which is the decision between the grad a: and i grouped things according to what how they would fit in to image schemas that would be related . and the two that i came up with were trajector - landmark and then source - path - goal as initial ones . 
and then i said , the trajector would be the person in this case probably . grad a: , we have the concept of what their intention was , whether they were trying to tour or do business or whatever , or they were hurried . that 's related to that . and then in terms of the source , the things the only things that we had on there i believe were whether actually , i might have added these cuz i do n't think we talked too much about the source in the old one but whether the where i ' m currently at is a landmark might have a bearing on whether or the " landmark - iness " of where i ' m currently at . and " usefulness " is basi means is that an institutional facility like a town hall like that 's not something that you 'd visit for tourist 's tourism 's sake or whatever . travel constraints would be something like , maybe they said they can they only wanna take a bus like that , right ? and then those are somewhat related to the path , so that would determine whether we 'd could take we would be telling them to go to the bus stop or versus walking there directly . , " goal " . similar things as the source except they also added whether the entity was closed and whether they have somehow marked that is was the final destination . , and then if you go up , robert , so , in terms of context , what we had currently said was whether they were a businessman or a tourist of some other person . , discourse was related to whether they had asked about open hours or whether they asked about where the entrance was or the admission fee , along those lines . , prosody i do n't really i ' m not really what prosody means , in this context , so made up whether what they say is or h how they say it is that . grad a: , the parse would be what verb they chose , and then maybe how they modified it , in the sense of whether they said " i need to get there quickly " or whatever . and , in terms of world knowledge , this would just be like opening and closing times of things , the time of day it is , and whatnot . grad a: tourbook ? that would be , i , the " landmark - iness " of things , whether it 's in the tourbook or not . professor b: ch - ch . now . alright , so i understand what 's what you got . i do n't yet understand how you would use it . so let me see if ask professor b: a s no , i understand that , but so , what let 's slide back up again and see start at the bottom and oop - bo - doop - boop . so , you could imagine w , go ahead , you were about to go up there and point to something . professor b: , ok . so , so if you if we made if we wanted to make it into a real bayes - net , that is , with fill , actually f , fill it @ in , then grad a: so we 'd have to get rid of this and connect these things directly to the mode . professor b: , here 's the problem . and and bhaskara and i was talking about this a little earlier today is , if we just do this , we could wind up with a huge , combinatoric input to the mode thing . and grad a: i , i unders i understand that , it 's hard for me to imagine how he could get around that . professor b: , i but that 's what we have to do . ok , so , there there are a variety of ways of doing it . let me just mention something that i do n't want to pursue today which is there are technical ways of doing it , i slipped a paper to bhaskara and about noisy - or 's and noisy - maxes there 're ways to back off on the purity of your bayes - net - edness . , so . 
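( for reference , a minimal sketch of the noisy - or idea just mentioned : it keeps the conditional table linear in the number of parents instead of exponential , one parameter per cause . the parent names and probabilities below are invented . )

```python
def noisy_or(active_parents, p_cause, leak=0.0):
    """p(effect | parents) under the noisy-or assumption: each active
    parent i independently fails to cause the effect with probability
    1 - p_cause[i], so n parents need n parameters, not 2**n rows.
    """
    p_none = 1.0 - leak
    for parent in active_parents:
        p_none *= 1.0 - p_cause[parent]
    return 1.0 - p_none

# hypothetical use: three input features "voting" for one mode
p_cause = {"landmark": 0.8, "asked_hours": 0.6, "tourist": 0.5}
print(noisy_or(["landmark", "tourist"], p_cause))  # ~0.9
```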
if you co you could i m a and i now i that any of those actually apply in this case , but there is some technology you could try to apply . grad a: so it 's possible that we could do something like a summary node of some sort that grad a: so in that case , the sum we 'd have we , these would n't be the summary nodes . we 'd have the summary nodes like where the things were i maybe if thi if things were related to business or some other professor b: so what i was gon na say is maybe a good at this point is to try to informally , not necessarily in th in this meeting , but to try to informally think about what the decision variables are . so , if you have some bottom line decision about which mode , what are the most relevant things . and the other trick , which is not a technical trick , it 's a knowledge engineering trick , is to make the n each node sufficiently narrow that you do n't get this combinatorics . so that if you decided that you could characterize the decision as a trade - off between three factors , whatever they may be , ok ? then you could say " aha , let 's have these three factors " , and maybe a binary version f for each , or some relatively compact decision node just above the final one . and then the question would be if those are the things that you care about , can you make a relatively compact way of getting from the various inputs to the things you care about . so that y so that , you can try to do a knowledge engineering thing given that we 're not gon na screw with the technology and just always use orthodox bayes - nets , then we have a knowledge engineering little problem of how do we do that . grad a: so what i need to do is to take this one and the old one and merge them together ? so that professor b: , mmm , something . , so , robert has thought about this problem f for a long time , cuz he 's had these examples kicking around , so he may have some good intuition about , what are the crucial things . and , i understand where this the this is a way of playing with this abs source - path - goal trajector exp abstraction and sh displaying it in a particular way . , i do n't think our friends on wednesday are going to be able to , maybe they will . , let me think about whether we can present this to them or not . , grad d: , this is still , ad - hoc . this is th the second version and i look at this maybe just as a , a whatever , uml diagram or , as just a screen shot , not really as a bayes - net as john johno said . grad a: we could actually , y draw it in a different way , in the sense that it would make it more abstract . grad d: but the that , it just is a visual aid for thinking about these things which has comple clearly have to be specified m more carefully professor b: alright , le let me think about this some more , and see if we can find a way to present this to this linguists group that is helpful to them . grad d: , ultimately we may w we regard this as an exercise in thinking about the problem and maybe a first version of a module , if you wanna call it that , that you can ask , that you can give input and it 'll throw the dice for you , throw the die for you , because i integrated this into the existing smartkom system in the same way as much the same way we can have this thing . close this down . so if this is what m - three - l will look like and what it 'll give us , and a very simple thing . we have an action that he wants to go from somewhere , which is some type of object , to someplace . 
and this these this changed now only , it 's doing it twice now because it already did it once . , we 'll add some action type , which in this case is " approach " and could be , more refined in many ways . grad d: or we can have something where the goal is a public place and it will give us then an action type of the type " enter " . so this is just based on this one , on this one feature , and that 's about all you can do . and so in the f if this pla if the object type here is a m is a landmark , it 'll be " vista " . and this is about as much as we can do if we do n't w if we want to avoid a huge combinatorial explosion where we specify " ok , if it 's this and this but that is not the case " , and , it just gets really messy . professor b: it was much too quick for me . ok , so let me see if i understand what you 're saying . so , i do understand that you can take the m - three - l and add not and it w and you need to do this , for , we have to add , not too much about object types and , and what you did is add some rules of the style that are already there that say " if it 's of type " landmark " , then you take you 're gon na take a picture of it . " professor b: f full stop , that 's what you do . ev - every landmark you take a picture of , grad d: every public place you enter , and statue you want to go as near as possible . professor b: you enter you approach . , and certainly you can add rules like that to the existing smartkom system . and you just did , right ? professor b: , that 's a that 's another baseline case , that 's another thing " ok , here 's a another minimal way of tackling this " . add extra properties , a deterministic rule for every property you have an action , " pppt ! " you do that . , then the question would be now , if that 's all you 're doing , then you can get the types from the ontology , because that 's all you 're using is this type the types in the ontology and you 're done . right ? so we do n't use the discourse , we do n't use the context , we do n't do any of those things . alright , but that 's ok , and it 's again a one minimal extension of the existing things . and that 's something the smartkom people themselves would they 'd say " , that 's no problem , no problem to add types to the ont " grad d: this is just in order to exemplify what we can do very , very easily is , we have this silly interface and we have the rules that are as banal as of we just saw , and we have our content . now , the content i whi which is what we see here , which is the vista , schema , source , path , goal , whatever . this will be a job to find ways of writing down image schema , x - schema , constructions , in some form , and have this be in a in the content , loosely called " constructicon " . and the rules we want to throw away completely . and and here is exactly where what 's gon na be replaced with our bayes - net , which is exactly getting the input feeding into here . this decides whether it 's an whether action the enter , the vista , or the whatever professor b: that 's what you said , that 's fine . , but but it 's not construction there , it 's action . construction is a d is a different story . grad a: right . this is so what we 'd be generating would be a reference to a semantic like parameters for the x - schema ? professor b: for for yes . so that i if you had the generalized " go " x - schema and you wanted to specialize it to these three ones , then you would have to supply the parameters . 
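Both halves of this exchange fit in a few lines. Below is a sketch of (1) the deterministic baseline rules just demonstrated, one action type per object type with no discourse or context at all, and (2) the parameter-supplying step: specializing a generalized "go" x-schema once a mode is chosen. The rule table follows the examples in the discussion; the x-schema field names are invented for illustration:

BASELINE_RULES = {
    "landmark":     "vista",   # every landmark you take a picture of
    "public place": "enter",   # every public place you enter
    "statue":       "tango",   # go as near as possible
}

def action_type(object_type, default="tango"):
    return BASELINE_RULES.get(object_type, default)

def specialize_go(mode, goal):
    """Supply the parameters that turn the generalized 'go' x-schema
    into one of the three specialized actions."""
    final_act = {"vista": "take-picture",
                 "enter": "go-inside",
                 "tango": "approach-closely"}[mode]
    return {"x-schema": "go", "goal": goal, "final-act": final_act}

print(specialize_go(action_type("public place"), "town hall"))
# {'x-schema': 'go', 'goal': 'town hall', 'final-act': 'go-inside'}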
and then , although we have n't worried about this yet , you might wanna worry about something that would go to the gis and use that to actually get , detailed route planning . so , where do you do take a picture of it and like that . professor b: but that 's not it 's not the immediate problem . but , presumably that functionality 's there when we professor b: , so the pro the immediate problem is back t to what you were what you are doing with the belief - net . , what are we going to use to make this decision grad a: right and then , once we ' ve made the decision , how do we put that into the content ? professor b: , that actually is relatively easy in this case . the harder problem is we decide what we want to use , how are we gon na get it ? and that the that 's the hardest problem . so , the hardest problem is how are you going to get this information from some combination of the what the person says and the context and the ontology . the h so , that 's the hardest problem at the moment is where are you gon na how are you gon na g get this information . and that 's so , getting back to here , we have a d a technical problem with the belief - nets that we do n't want all the com professor b: too many factors if we allow them to just go combinatorially . so we wanna think about which ones we really care about and what they really most depend on , and can we c , clean this up to the point where it grad a: so what we really wanna do i cuz this is really just the three layer net , we wanna b make it expand it out into more layers ? professor b: we might . , that 's certainly one thing we can do . , it 's true that the way you have this , a lot of the times you have what you 're having is the values rather than the variable . grad a: so instead of in instead it should really be just be " intention " as a node instead of " intention business " or " intention tour " . professor b: so you right , and then it would have values , " tour " , " business " , or " hurried " . but then but i it still some knowledge design to do , about i how do you wanna break this up , what really matters . , it 's fine . , we have to it 's iterative . we 're gon na have to work with it some . grad a: what was going through my mind when i did it was someone could both have a business intention and a touring intention and the probabilities of both of them happening at the same time professor b: , you could do that . and it 's perfectly ok to insist that , th , they add up to one , but that there 's that it does n't have to be one zero . so you could have the conditional p the each of these things is gon na be a probability . so whenever there 's a choice , so like landmark - ness and usefulness , professor b: right . and so that you might want to then have those b th - then they may have to be separate . they may not be able to be values of the same variable . professor b: so that 's but again , this is the knowledge design you have to go through . it 's , it 's great is , as one step toward where we wanna go . grad d: also it strikes me that we m may want to approach the point where we can try to find a , a specification for some interface , here that takes the normal m - three - l , looks at it . then we discussed in our pre - edu edu meeting how to ask the ontology , what to ask the ontology the fact that we can pretend we have one , make a dummy until we get the real one , and so we may wanna decide we can do this from here , but we also could do it if we have a belief - net interface . 
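The variable-versus-value point can be written down directly. A hedged sketch, with names taken from the diagram discussion and all numbers invented:

# "intention" becomes one node whose values form a distribution --
# business and touring can both be live; it need not be one-zero,
# it just has to sum to one:
intention = {"tour": 0.45, "business": 0.45, "hurried": 0.10}
assert abs(sum(intention.values()) - 1.0) < 1e-9

# other multi-valued variables from the diagram, rewritten the same way:
VARIABLES = {
    "intention": ["tour", "business", "hurried"],
    "user-type": ["businessman", "tourist", "other"],
    "mode":      ["vista", "enter", "tango"],
}

# landmark-iness and usefulness can hold simultaneously, so they stay
# separate (here binary) nodes rather than values of one variable:
separate_nodes = {"landmark-iness": 0.8, "usefulness": 0.3}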
so the belief - net takes as input , a vector , right ? of . and it output is whatever , as . but this information is just m - three - l , and then we want to look up some more in the ontology and we want to look up some more in the maybe we want to ask the real world , maybe you want to look something up in the grs , but also we definitely want to look up in the dialogue history some s some . based on we have i was just made some examples from the ontology and so we have some information there that the town hall is both a building and it has doors and like this , but it is also an institution , so it has a mayor and we get relations out of it and once we have them , we can use that information to look in the dialogue history , were any of these things that are part of the town hall as an institution mentioned ? , were any of these that make the town hall a building mentioned ? , grad d: and , and maybe draw some inferences on that . so this may be a process of two to three steps before we get our vector , that we feed into the belief - net , professor b: there will be rules , but they are n't rules that come to final decisions , they 're rules that gather information for a decision process . no that 's just fine . , . so they 'll they presumably there 'll be a thread or process that agent , " agent " , whatever you wan wanna say , that is rule - driven , and can do things like that . and there 's an issue about whether there will be that 'll be the same agent and the one that then goes off and carries out the decision , so it probably will . my is it 'll be the same basic agent that can go off and get information , run it through a c this belief - net that turn a crank in the belief - net , that 'll come out with s more another vector , which can then be applied at what we would call the simulation or action end . so you now you 're gon na do and that may actually involve getting more information . so on once you pull that out , it could be that says " ! now that we know that we gon na go ask the ontology something else . " now that we know that it 's a bus trip , we did n't we did n't need to know beforehand , how long the bus trip takes or whatever , but now that we know that 's the way it 's coming out then we got ta go find out more . so that 's ok . grad d: so this is actually , s if we were to build something that is , and , i had one more thing , the it needs to do we come up with a code for a module that we call the " cognitive dispatcher " , which does nothing , but it looks of complect object trees and decides how are there parts missing that need to be filled out , there 's this is maybe something that this module can do , something that this module can do and then collect sub - objects and then recombine them and put them together . so maybe this is actually some useful tool that we can use to rewrite it , and get this part , then . professor b: i confess , i ' m still not completely comfortable with the overall story . i i this this is not a complaint , this is a promise to do more work . so i ' m gon na hafta think about it some more . in particular see what we 'd like to do , and this has been implicit in the discussion , is to do this in such a way that you get a lot of re - use . so . what you 're trying to get out of this deep co cognitive linguistics is the fact that w if about source , paths and goals , and nnn all this , that a lot of this is the same , for different tasks . 
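Before the re-use question, the information-gathering loop described above is worth pinning down. A sketch of the rule-driven agent: the rules decide nothing final, they only assemble the evidence vector the belief-net consumes, pulling from the M-three-L input, the ontology, and the dialogue history. Every interface below is a stand-in, not real SmartKom or ontology API:

ONTOLOGY = {  # the town hall is both a building and an institution
    "town hall": {"building": ["doors"], "institution": ["mayor"]},
}

def gather(m3l, dialogue_history):
    """Assemble the evidence vector for the belief-net."""
    evidence = {"object": m3l["object"], "action": m3l["action"]}
    for role, parts in ONTOLOGY.get(m3l["object"], {}).items():
        # were any parts tied to this role mentioned earlier in the dialogue?
        evidence[role + "-mentioned"] = any(p in dialogue_history for p in parts)
    return evidence

print(gather({"object": "town hall", "action": "go"}, dialogue_history={"mayor"}))
# {'object': 'town hall', 'action': 'go',
#  'building-mentioned': False, 'institution-mentioned': True}

The same agent would then run the net and, depending on the outcome (say, a bus trip), go back for more information — a second gathering pass after the decision.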
and that there 's some important generalities that you 're getting , so that you do n't take each and every one of these tasks and hafta re - do it . and i do n't yet see how that goes . professor b: u what are the primitives , and how do you break this so i y i ' m just there saying eee you i know how to do any individual case , but i do n't yet see what 's the really interesting question is can you use deep cognitive linguistics to get powerful generalizations . grad d: maybe we sho should we a add then the " what 's this ? " domain ? , we have to how do i get to x . then we also have the " what 's this ? " domain , where we get some slightly different grad d: johno , actually , does not allow us to call them " intentions " anymore . so he dislikes the term . professor b: , i do n't like the term either , so i have n i y w i it grad d: but , i ' m the " what 's this ? " questions also create some interesting x - schema aspects . professor b: could be . i ' m not a i ' m not op particularly opposed to adding that or any other task , eventually we 're gon na want a whole range of them . professor b: i ' m just saying that i ' m gon na hafta do some first principles thinking about this . at the moment . h no . , no the bayes - nets the bayes - nets will be dec specific for each decision . but what i 'd like to be able to do is to have the way that you extract properties , that will go into different bayes - nets , be the general . so that if you have sources , you have trajectors and like that , and there 's a language for talking about trajectors , you should n't have to do that differently for going to something , than for circling it , for telling someone else how to go there , professor b: whatever it is . so that , the decision processes are gon na be different what you 'd really like is the same thing you 'd always like which is that you have a intermediate representation which looks the same o over a bunch of inputs and a bunch of outputs . so all sorts of different tasks and all sorts of different ways of expressing them use a lot of the same mechanism for pulling out what are the fundamental things going on . and that 's that would be the really pretty result . and pushing it one step further , when you get to construction grammar and , what you 'd like to be able to do is say you have this parser which is much fancier than the parser that comes with smartkom , i that actually uses constructions and is able to tell from this construction that there 's something about the intent , the actual what people wanna do or what they 're referring to and , in independent of whether it about what is this or where is it , that you could tell from the construction , you could pull out deep semantic information which you 're gon na use in a general way . so that 's the you might . you might be able to say that this i this is the construction in which the there 's let 's say there 's a cont there the land the construction implies the there 's a con this thing is being viewed as a container . so just from this local construction that you 're gon na hafta treat it as a container you might as go off and get that information . and that may effect the way you process everything else . so if you say " how do i get into the castle " then or , " what is there in the castle " or so there 's all sorts of things you might ask that involve the castle as a container and you 'd like to have this orthogonal so that anytime the castle 's referred to as a container , you crank up the appropriate . 
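The container example at the end here is concrete enough for a toy extractor: one shared routine pulls the image-schematic construal out of the construction, independent of which task or decision net consumes it. The construction patterns below are invented for illustration; a real version would come from the construction parser, not from regular expressions:

import re

CONTAINER_CONSTRUCTIONS = [
    r"\bget into (the \w+)",               # "how do I get into the castle"
    r"\bwhat is (?:there )?in (the \w+)",  # "what is there in the castle"
]

def container_landmarks(utterance):
    """Return landmarks this utterance construes as containers."""
    hits = []
    for pattern in CONTAINER_CONSTRUCTIONS:
        hits += re.findall(pattern, utterance.lower())
    return hits

print(container_landmarks("How do I get into the castle?"))   # ['the castle']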
independent of what the goal is , and independent of what the surrounding language is . alright , so that 's the thesis level grad d: it 's unfortunate also that english has got rid of most of its spatial adverbs because they 're really fancy then , in for these kinds of analysis . but . grad d: , but they 're easier for parsers . parsers can pick those up but the with the spatial adverbs , they have a tough time . because the mean the semantics are very complex in that . ok , ? i had one more thing . i do n't remember . forgot it again . , b but an architecture like this would also enable us maybe to throw this away and replace it with something else , or whatever , so that we have so that this is the representational formats we 're talking about that are independent of the problem , that generalize over those problems , and are , t of a higher quality than an any actual whatever belief - net , or " x " that we may use for the decision making , ultimately . should be decoupled , . professor b: so , are we gon na be meeting here from now on ? i ' m happy to do that . we we had talked about it , cuz you have th the display and everything , that seems fine . grad d: , liz also asks whether we 're gon na have presentations every time . i do n't think we will need to do that but it 's so far it was as a visual aid for some things and professor b: no it 's worth it to ass to meet here to bring this , and assume that something may come up that we wanna look at . why not . grad d: the , she w she was definitely good in the sense that she showed us some of the weaknesses and also the fact that she was a real subject , is grad d: so that , w looking just looking at this data , listening to it , what can we get out of it in terms of our problem , is , she actually m said , she never s just spoke about entering , she just wanted to get someplace , and she said for buying . nuh ? so this is definitely interesting , and grad d: , and in the other case , where she wanted to look at the graffiti , also , not in the sentence " how do you get there ? " was pretty standard . except that there was a anaphora , for pointing at what she talked about before , and there she was talking about looking at pictures that are painted inside a wall on walls , so actually , you 'd need a lot of world knowledge . this would have been a classical " tango " , actually . , because graffiti is usually found on the outside and not on the inside , so the mistake would have make a mistake the system would have made a mistake here .
a test run of the data collection design was very successful. the group decided to hire the "wizard" and continue with the refinement of the design and recruitment of subjects. on the other hand , there was a presentation of a new version of the belief-net for the vista/enter/tango mode task. it is not a working net yet , but identifying clusters of features that define the output mode provides a visual aid for further work. there are potential problems from a combinatorics perspective. these can be tackled either with technical adjustments or through careful knowledge engineering. a base solution for the task would be to simply add some extra action-mode rules in the smartkom system. action modes , however , can be inferred more efficiently by feeding a collection of features - from the ontology , discourse history , parsing , etc. - into bayes-nets that would replace those rules. ideally , the results of this small task will give insights into the function of linguistic deep understanding. for instance , the final combination of features used in the current study may form a representation of the ontology , general enough to employ in any task that includes trajectors and paths. although the data collection test went well and it was decided to hire the "wizard" , there are minor amendments to the procedure to be carried out ( shortening the preparatory reading , numbering the tasks , etc. ). a presentation of the data collection design should be included in the forthcoming meeting with other research groups , along with some account of the system design and the use of belief-nets. the structure of the latter was a major issue during the meeting. their input vector is to be provided by information extracted from the various modules of the system , such as the ontology and discourse history , using standard rule-based methods. the output action essentially provides additional semantic parameters for the x-schema and , in turn , may trigger the collection of more features from the data. the feature extraction could be carried out by a software tool checking object ( feature ) trees and filling them with appropriate values. for this concept to be put to work , further refinement of the belief-net variables is necessary , with particular attention to the combinatorics involved. after the trial run of the experiment , some minor issues , like the length of the preliminary reading and the need to number the tasks given to the subjects , were highlighted. it is also possible that the pool of subjects ends up consisting almost entirely of students; more variation in the sample is needed. from a system design perspective , the progress so far has shown that the combinatorics of the bayes-net , even for a simple task like the choice of vista/enter/tango mode , could render it unmanageable. the belief-net presented in the meeting is not a working bayes-net. consequently , it is as yet unclear what the decision nodes in the net are , and what values these can take. even if those were decided , how to extract the necessary information from the data would still be an open issue. looking at the bigger picture , the current task is yet to provide insights into more general ways to achieve linguistic deep understanding. with these intricacies in mind , it is not easy to put together a presentation of the project that is cohesive and attractive enough for the other research groups in the institute. the first trial run of the experiments was very successful.
after reading some information on a city , the subject , acting as a tourist , has to ask for information over a phone line in order to carry out certain tasks. a "wizard" on the other side of the line pretends to be a computer system for half of the duration. for the second half , the subject is aware of communicating with another human. in parallel to the data collection , there was further work on the belief-net for the inferences currently studied. the features determining the output mode ( vista , enter or tango ) of the belief-net have been grouped into categories: trajector , landmark , source , path , goal , parse , prosody , world knowledge , discourse and context. every particular feature , like "time of day" , "being in a hurry" , "business or tourism" , etc. , will fit into one of these categories. although the presented net is not a working bayes-net yet , it serves as a visual aid and stepping stone for the work to follow.
###dialogue: grad a: do we have to read them that slowly ? ok . sounded like a robot . , this is t grad a: three three six zero . four two zero one seven . that 's what of when of beat poetry . grad a: and he talks like that . that 's why i thi that probably is why of it that way . grad a: mike meyers is the guy . it - it 's his cute romantic comedy . that 's that 's that 's his cute romantic comedy , . the other thing that 's real funny , i 'll spoil it for you . is when he 's he works in a coffee shop , in san francisco , and he 's sitting there on this couch and they bring him this massive cup of espresso , and he 's like " excuse me i ordered the large espresso ? " grad a: do are y so you 're trying to decide who 's the best taster of tiramisu ? grad d: no ? . there was a fierce argument that broke out over whose tiramisu might be the best and so we decided to have a contest where those people who claim to make good tiramisu make them , and then we got a panel of impartial judges that will taste do a blind taste and then vote . should be fun . grad a: seems like you could put a s magic special ingredient in , so that everyone know which one was yours . then , if you were to bribe them , you could grad d: , i was thinking if y you guys have plans for sunday ? we 're we 're not it 's probably going to be this sunday , but we 're working with the weather here because we also want to combine it with some barbecue activity where we just fire it up and what whoever brings whatever , can throw it on there . so only the tiramisu is free , nothing else . grad a: , i ' m going back to visit my parents this weekend , so , i 'll be out of town . grad a: we are . is nancy s gon na show up ? mmm . wonder if these things ever emit a very , like , piercing screech right in your ear ? grad d: they are gon na get more comfortable headsets . they already ordered them . let 's get started . the should i go first , with the , data . can i have the remote control . so . on friday we had our wizard test data test and these are some of the results . this was the introduction . i actually , even though liz was kind enough to offer to be the first subject , i felt that she knew too much , so i asked litonya . just on the spur of the moment , and she was kind enough to serve as the first subject . grad d: so , this is what she saw as part of as for instr introduction , this is what she had to read aloud . , that was really difficult for her and grad d: the names and this was the first three tasks she had to master after she called the system , and then the system broke down , and those were the l i should say the system was supposed to break down and then these were the remaining three tasks that she was going to solve , with a human there are here are the results . mmm . and i will not we will skip the reading now . d . and . the reading was five minutes , exactly . and now comes the this is the phone - in phase of grad c: , can i have a question . so there 's no system , right ? like , there was a wizard for both parts , is this right ? grad d: it was bo it both times the same person . one time , pretending to be a system , one time , to pretending to be a human , which is actually not pretending . grad c: . is n't this obvious when it says " ok now you 're talking to a human " and then the human has the same voice ? grad d: no no . we u . ok , good question , but you just and see . it 's you 're gon na l learn . 
and the wizard sometimes will not be audible , because she was actually they there was some lapse in the wireless , we have to move her closer . grad a: is she mispronouncing " anlage " ? is it " anlaga " or " anlunga " grad d: they 're mispronouncing everything , but it 's this is the system breaking down , actually . did i call europe ? so , this is it . , if we grad d: there was a strange reflex . i have a headache . i ' m really out of it . ok , the lessons learned . the reading needs to be shorter . five minutes is just too long . , that was already anticipated by some people suggested that if we just have bullets here , they 're gon na not they 're subjects are probably not gon na going to follow the order . and she did not . professor b: s so if you just number them " one " , " two " , " three " it 's grad d: . we need to so that 's one thing . and we need a better introduction for the wizard . that is something that fey actually thought of a in the last second that sh the system should introduce itself , when it 's called . grad d: and , another suggestion , by liz , was that we , through subjects , switch the tasks . so when they have task - one with the computer , the next person should have task - one with a human , and . so we get data for that . , we have to refine the tasks more and more , which we have n't done , so far , in order to avoid this rephrasing , so where , even though w we do n't tell the person " ask blah - blah " they still try , or at least litonya tried to repeat as much of that text as possible . grad d: and my suggestion is we keep the wizard , because she did a wonderful job , grad d: in the sense that she responded quite nicely to things that were not asked for , how much is a t a bus ticket and a transfer so this is gon na happen all the time , we d you can never be . johno pointed out that we have maybe a grammatical gender problem there with wizard . grad a: i was n't whether wizard was the correct term for " not a man " . grad d: and so , some work needs to be done , but we can and this , and in case no you had n't seen it , this is what litonya looked at during the while taking the while partaking in the data collection . professor b: ok , great . so first of all , i agree that we should hire fey , and start paying her . probably pay for the time she 's put in as . , do exactly how to do that , or is lila , what exactly do we do to put her on the payroll in some way ? professor b: so why do n't you ask lila and see what she says about exactly what we do for someone in th professor b: she just graduated but anyway . so i if , i agree , she sounded fine , she a actually was , more , present and than she was in conversation , so she did a better job than i would have guessed from just talking to her . so that 's great . grad d: this is what i gave her , so this is h how to get to the student prison , and i did n't even spell it out here and in some cases i spelled it out a little bit more thoroughly , this is the information on the low sunken castle , and the amphitheater that never came up , and , so i if we give her even more , instruments to work with the results are gon na be even better . professor b: , and then as she does it she 'll learn @ . and also if she 's willing to take on the job of organizing all those subjects and that would be wonderful . and , she 's actually she 's going to graduate school in a an experimental paradigm , so this is all just fine in terms of h her learning things she 's gon na need to know , to do her career . 
so , i my is she 'll be r quite happy to take on that job . and , so grad d: she did n't explicitly state that so . and i told her that we gon na figure out a meeting time in the near future to refine the tasks and s look for the potential sources to find people . she also agrees that if it 's all just gon na be students the data is gon na be less valuable because of that professor b: , as i say there is this s set of people next door , it 's not hard to grad d: we 're already however , we may run into a problem with a reading task there . and , we 'll see . professor b: we could talk to the people who run it and see if they have a way that they could easily tell people that there 's a task , pays ten bucks , but you have to be comfortable reading relatively complicated . and and there 'll probably be self - selection to some extent . , so that 's good . now , i signed us up for the wednesday slot , and part of what we should do is this . so , my idea on that was , partly we 'll talk about system for the computer scientists , but partly i did want it to get the linguists involved in some of this issue about what the task is and all , what the dialogue is , and what 's going on linguistically , because to the extent that we can get them contributing , that will be good . so this issue about re - formulating things , maybe we can get some of the linguists sufficiently interested that they 'll help us with it , other linguists , if you 're a linguist , but in any case , the linguistics students and . so my idea on wednesday is partly to you , what you did today would i is just fine . you just do " this is what we did , and here 's the thing , and here 's some of the dialogue and . " but then , the other thing is we should give the computer scientists some idea of what 's going on with the system design , and where we think the belief - nets fit in and where the pieces are and like that . is is this make sense to everybody ? so , i do n't think it 's worth a lot of work , particularly on your part , to make a big presentation . i do n't think you should you do n't have to make any new powerpoint or anything . we got plenty of to talk about . and , then just see how a discussion goes . grad d: sounds good . the other two things is we ' ve can have johno tell us a little about this and we also have a l little bit on the interface , m - three - l enhancement , and then that was it , . grad a: so , what i did for this is , a pedagogical belief - net because i was i took i tried to conceptually do what you were talking about with the nodes that you could expand out so what i did was i took i made these dummy nodes called trajector - in and trajector - out that would isolate the things related to the trajector . and then there were the things with the source and the path and the goal . and i separated them out . and then i did similar things for our net to with the context and the discourse and whatnot , so we could isolate them or whatever in terms of the top layer . and then the bottom layer is just the mode . so . professor b: so , let 's , i do n't understand it . let 's go slide all the way up so we see what the p very bottom looks like , or is that it ? grad a: , there 's just one more node and it says " mode " which is the decision between the grad a: and i grouped things according to what how they would fit in to image schemas that would be related . and the two that i came up with were trajector - landmark and then source - path - goal as initial ones . 
and then i said , the trajector would be the person in this case probably . grad a: , we have the concept of what their intention was , whether they were trying to tour or do business or whatever , or they were hurried . that 's related to that . and then in terms of the source , the things the only things that we had on there i believe were whether actually , i might have added these cuz i do n't think we talked too much about the source in the old one but whether the where i ' m currently at is a landmark might have a bearing on whether or the " landmark - iness " of where i ' m currently at . and " usefulness " is basi means is that an institutional facility like a town hall like that 's not something that you 'd visit for tourist 's tourism 's sake or whatever . travel constraints would be something like , maybe they said they can they only wanna take a bus like that , right ? and then those are somewhat related to the path , so that would determine whether we 'd could take we would be telling them to go to the bus stop or versus walking there directly . , " goal " . similar things as the source except they also added whether the entity was closed and whether they have somehow marked that is was the final destination . , and then if you go up , robert , so , in terms of context , what we had currently said was whether they were a businessman or a tourist of some other person . , discourse was related to whether they had asked about open hours or whether they asked about where the entrance was or the admission fee , along those lines . , prosody i do n't really i ' m not really what prosody means , in this context , so made up whether what they say is or h how they say it is that . grad a: , the parse would be what verb they chose , and then maybe how they modified it , in the sense of whether they said " i need to get there quickly " or whatever . and , in terms of world knowledge , this would just be like opening and closing times of things , the time of day it is , and whatnot . grad a: tourbook ? that would be , i , the " landmark - iness " of things , whether it 's in the tourbook or not . professor b: ch - ch . now . alright , so i understand what 's what you got . i do n't yet understand how you would use it . so let me see if ask professor b: a s no , i understand that , but so , what let 's slide back up again and see start at the bottom and oop - bo - doop - boop . so , you could imagine w , go ahead , you were about to go up there and point to something . professor b: , ok . so , so if you if we made if we wanted to make it into a real bayes - net , that is , with fill , actually f , fill it @ in , then grad a: so we 'd have to get rid of this and connect these things directly to the mode . professor b: , here 's the problem . and and bhaskara and i was talking about this a little earlier today is , if we just do this , we could wind up with a huge , combinatoric input to the mode thing . and grad a: i , i unders i understand that , it 's hard for me to imagine how he could get around that . professor b: , i but that 's what we have to do . ok , so , there there are a variety of ways of doing it . let me just mention something that i do n't want to pursue today which is there are technical ways of doing it , i slipped a paper to bhaskara and about noisy - or 's and noisy - maxes there 're ways to back off on the purity of your bayes - net - edness . , so . 
if you co you could i m a and i now i that any of those actually apply in this case , but there is some technology you could try to apply . grad a: so it 's possible that we could do something like a summary node of some sort that grad a: so in that case , the sum we 'd have we , these would n't be the summary nodes . we 'd have the summary nodes like where the things were i maybe if thi if things were related to business or some other professor b: so what i was gon na say is maybe a good at this point is to try to informally , not necessarily in th in this meeting , but to try to informally think about what the decision variables are . so , if you have some bottom line decision about which mode , what are the most relevant things . and the other trick , which is not a technical trick , it 's a knowledge engineering trick , is to make the n each node sufficiently narrow that you do n't get this combinatorics . so that if you decided that you could characterize the decision as a trade - off between three factors , whatever they may be , ok ? then you could say " aha , let 's have these three factors " , and maybe a binary version f for each , or some relatively compact decision node just above the final one . and then the question would be if those are the things that you care about , can you make a relatively compact way of getting from the various inputs to the things you care about . so that y so that , you can try to do a knowledge engineering thing given that we 're not gon na screw with the technology and just always use orthodox bayes - nets , then we have a knowledge engineering little problem of how do we do that . grad a: so what i need to do is to take this one and the old one and merge them together ? so that professor b: , mmm , something . , so , robert has thought about this problem f for a long time , cuz he 's had these examples kicking around , so he may have some good intuition about , what are the crucial things . and , i understand where this the this is a way of playing with this abs source - path - goal trajector exp abstraction and sh displaying it in a particular way . , i do n't think our friends on wednesday are going to be able to , maybe they will . , let me think about whether we can present this to them or not . , grad d: , this is still , ad - hoc . this is th the second version and i look at this maybe just as a , a whatever , uml diagram or , as just a screen shot , not really as a bayes - net as john johno said . grad a: we could actually , y draw it in a different way , in the sense that it would make it more abstract . grad d: but the that , it just is a visual aid for thinking about these things which has comple clearly have to be specified m more carefully professor b: alright , le let me think about this some more , and see if we can find a way to present this to this linguists group that is helpful to them . grad d: , ultimately we may w we regard this as an exercise in thinking about the problem and maybe a first version of a module , if you wanna call it that , that you can ask , that you can give input and it 'll throw the dice for you , throw the die for you , because i integrated this into the existing smartkom system in the same way as much the same way we can have this thing . close this down . so if this is what m - three - l will look like and what it 'll give us , and a very simple thing . we have an action that he wants to go from somewhere , which is some type of object , to someplace . 
and this these this changed now only , it 's doing it twice now because it already did it once . , we 'll add some action type , which in this case is " approach " and could be , more refined in many ways . grad d: or we can have something where the goal is a public place and it will give us then an action type of the type " enter " . so this is just based on this one , on this one feature , and that 's about all you can do . and so in the f if this pla if the object type here is a m is a landmark , it 'll be " vista " . and this is about as much as we can do if we do n't w if we want to avoid a huge combinatorial explosion where we specify " ok , if it 's this and this but that is not the case " , and , it just gets really messy . professor b: it was much too quick for me . ok , so let me see if i understand what you 're saying . so , i do understand that you can take the m - three - l and add not and it w and you need to do this , for , we have to add , not too much about object types and , and what you did is add some rules of the style that are already there that say " if it 's of type " landmark " , then you take you 're gon na take a picture of it . " professor b: f full stop , that 's what you do . ev - every landmark you take a picture of , grad d: every public place you enter , and statue you want to go as near as possible . professor b: you enter you approach . , and certainly you can add rules like that to the existing smartkom system . and you just did , right ? professor b: , that 's a that 's another baseline case , that 's another thing " ok , here 's a another minimal way of tackling this " . add extra properties , a deterministic rule for every property you have an action , " pppt ! " you do that . , then the question would be now , if that 's all you 're doing , then you can get the types from the ontology , because that 's all you 're using is this type the types in the ontology and you 're done . right ? so we do n't use the discourse , we do n't use the context , we do n't do any of those things . alright , but that 's ok , and it 's again a one minimal extension of the existing things . and that 's something the smartkom people themselves would they 'd say " , that 's no problem , no problem to add types to the ont " grad d: this is just in order to exemplify what we can do very , very easily is , we have this silly interface and we have the rules that are as banal as of we just saw , and we have our content . now , the content i whi which is what we see here , which is the vista , schema , source , path , goal , whatever . this will be a job to find ways of writing down image schema , x - schema , constructions , in some form , and have this be in a in the content , loosely called " constructicon " . and the rules we want to throw away completely . and and here is exactly where what 's gon na be replaced with our bayes - net , which is exactly getting the input feeding into here . this decides whether it 's an whether action the enter , the vista , or the whatever professor b: that 's what you said , that 's fine . , but but it 's not construction there , it 's action . construction is a d is a different story . grad a: right . this is so what we 'd be generating would be a reference to a semantic like parameters for the x - schema ? professor b: for for yes . so that i if you had the generalized " go " x - schema and you wanted to specialize it to these three ones , then you would have to supply the parameters . 
and then , although we have n't worried about this yet , you might wanna worry about something that would go to the gis and use that to actually get , detailed route planning . so , where do you do take a picture of it and like that . professor b: but that 's not it 's not the immediate problem . but , presumably that functionality 's there when we professor b: , so the pro the immediate problem is back t to what you were what you are doing with the belief - net . , what are we going to use to make this decision grad a: right and then , once we ' ve made the decision , how do we put that into the content ? professor b: , that actually is relatively easy in this case . the harder problem is we decide what we want to use , how are we gon na get it ? and that the that 's the hardest problem . so , the hardest problem is how are you going to get this information from some combination of the what the person says and the context and the ontology . the h so , that 's the hardest problem at the moment is where are you gon na how are you gon na g get this information . and that 's so , getting back to here , we have a d a technical problem with the belief - nets that we do n't want all the com professor b: too many factors if we allow them to just go combinatorially . so we wanna think about which ones we really care about and what they really most depend on , and can we c , clean this up to the point where it grad a: so what we really wanna do i cuz this is really just the three layer net , we wanna b make it expand it out into more layers ? professor b: we might . , that 's certainly one thing we can do . , it 's true that the way you have this , a lot of the times you have what you 're having is the values rather than the variable . grad a: so instead of in instead it should really be just be " intention " as a node instead of " intention business " or " intention tour " . professor b: so you right , and then it would have values , " tour " , " business " , or " hurried " . but then but i it still some knowledge design to do , about i how do you wanna break this up , what really matters . , it 's fine . , we have to it 's iterative . we 're gon na have to work with it some . grad a: what was going through my mind when i did it was someone could both have a business intention and a touring intention and the probabilities of both of them happening at the same time professor b: , you could do that . and it 's perfectly ok to insist that , th , they add up to one , but that there 's that it does n't have to be one zero . so you could have the conditional p the each of these things is gon na be a probability . so whenever there 's a choice , so like landmark - ness and usefulness , professor b: right . and so that you might want to then have those b th - then they may have to be separate . they may not be able to be values of the same variable . professor b: so that 's but again , this is the knowledge design you have to go through . it 's , it 's great is , as one step toward where we wanna go . grad d: also it strikes me that we m may want to approach the point where we can try to find a , a specification for some interface , here that takes the normal m - three - l , looks at it . then we discussed in our pre - edu edu meeting how to ask the ontology , what to ask the ontology the fact that we can pretend we have one , make a dummy until we get the real one , and so we may wanna decide we can do this from here , but we also could do it if we have a belief - net interface . 
so the belief - net takes as input , a vector , right ? of . and it output is whatever , as . but this information is just m - three - l , and then we want to look up some more in the ontology and we want to look up some more in the maybe we want to ask the real world , maybe you want to look something up in the grs , but also we definitely want to look up in the dialogue history some s some . based on we have i was just made some examples from the ontology and so we have some information there that the town hall is both a building and it has doors and like this , but it is also an institution , so it has a mayor and we get relations out of it and once we have them , we can use that information to look in the dialogue history , were any of these things that are part of the town hall as an institution mentioned ? , were any of these that make the town hall a building mentioned ? , grad d: and , and maybe draw some inferences on that . so this may be a process of two to three steps before we get our vector , that we feed into the belief - net , professor b: there will be rules , but they are n't rules that come to final decisions , they 're rules that gather information for a decision process . no that 's just fine . , . so they 'll they presumably there 'll be a thread or process that agent , " agent " , whatever you wan wanna say , that is rule - driven , and can do things like that . and there 's an issue about whether there will be that 'll be the same agent and the one that then goes off and carries out the decision , so it probably will . my is it 'll be the same basic agent that can go off and get information , run it through a c this belief - net that turn a crank in the belief - net , that 'll come out with s more another vector , which can then be applied at what we would call the simulation or action end . so you now you 're gon na do and that may actually involve getting more information . so on once you pull that out , it could be that says " ! now that we know that we gon na go ask the ontology something else . " now that we know that it 's a bus trip , we did n't we did n't need to know beforehand , how long the bus trip takes or whatever , but now that we know that 's the way it 's coming out then we got ta go find out more . so that 's ok . grad d: so this is actually , s if we were to build something that is , and , i had one more thing , the it needs to do we come up with a code for a module that we call the " cognitive dispatcher " , which does nothing , but it looks of complect object trees and decides how are there parts missing that need to be filled out , there 's this is maybe something that this module can do , something that this module can do and then collect sub - objects and then recombine them and put them together . so maybe this is actually some useful tool that we can use to rewrite it , and get this part , then . professor b: i confess , i ' m still not completely comfortable with the overall story . i i this this is not a complaint , this is a promise to do more work . so i ' m gon na hafta think about it some more . in particular see what we 'd like to do , and this has been implicit in the discussion , is to do this in such a way that you get a lot of re - use . so . what you 're trying to get out of this deep co cognitive linguistics is the fact that w if about source , paths and goals , and nnn all this , that a lot of this is the same , for different tasks . 
and that there 's some important generalities that you 're getting , so that you do n't take each and every one of these tasks and hafta re - do it . and i do n't yet see how that goes . professor b: u what are the primitives , and how do you break this so i y i ' m just there saying eee you i know how to do any individual case , but i do n't yet see what 's the really interesting question is can you use deep cognitive linguistics to get powerful generalizations . grad d: maybe we sho should we a add then the " what 's this ? " domain ? , we have to how do i get to x . then we also have the " what 's this ? " domain , where we get some slightly different grad d: johno , actually , does not allow us to call them " intentions " anymore . so he dislikes the term . professor b: , i do n't like the term either , so i have n i y w i it grad d: but , i ' m the " what 's this ? " questions also create some interesting x - schema aspects . professor b: could be . i ' m not a i ' m not op particularly opposed to adding that or any other task , eventually we 're gon na want a whole range of them . professor b: i ' m just saying that i ' m gon na hafta do some first principles thinking about this . at the moment . h no . , no the bayes - nets the bayes - nets will be dec specific for each decision . but what i 'd like to be able to do is to have the way that you extract properties , that will go into different bayes - nets , be the general . so that if you have sources , you have trajectors and like that , and there 's a language for talking about trajectors , you should n't have to do that differently for going to something , than for circling it , for telling someone else how to go there , professor b: whatever it is . so that , the decision processes are gon na be different what you 'd really like is the same thing you 'd always like which is that you have a intermediate representation which looks the same o over a bunch of inputs and a bunch of outputs . so all sorts of different tasks and all sorts of different ways of expressing them use a lot of the same mechanism for pulling out what are the fundamental things going on . and that 's that would be the really pretty result . and pushing it one step further , when you get to construction grammar and , what you 'd like to be able to do is say you have this parser which is much fancier than the parser that comes with smartkom , i that actually uses constructions and is able to tell from this construction that there 's something about the intent , the actual what people wanna do or what they 're referring to and , in independent of whether it about what is this or where is it , that you could tell from the construction , you could pull out deep semantic information which you 're gon na use in a general way . so that 's the you might . you might be able to say that this i this is the construction in which the there 's let 's say there 's a cont there the land the construction implies the there 's a con this thing is being viewed as a container . so just from this local construction that you 're gon na hafta treat it as a container you might as go off and get that information . and that may effect the way you process everything else . so if you say " how do i get into the castle " then or , " what is there in the castle " or so there 's all sorts of things you might ask that involve the castle as a container and you 'd like to have this orthogonal so that anytime the castle 's referred to as a container , you crank up the appropriate . 
independent of what the goal is , and independent of what the surrounding language is . alright , so that 's the thesis level grad d: it 's unfortunate also that english has got rid of most of its spatial adverbs because they 're really fancy then , in for these kinds of analysis . but . grad d: , but they 're easier for parsers . parsers can pick those up but the with the spatial adverbs , they have a tough time . because the mean the semantics are very complex in that . ok , ? i had one more thing . i do n't remember . forgot it again . , b but an architecture like this would also enable us maybe to throw this away and replace it with something else , or whatever , so that we have so that this is the representational formats we 're talking about that are independent of the problem , that generalize over those problems , and are , t of a higher quality than an any actual whatever belief - net , or " x " that we may use for the decision making , ultimately . should be decoupled , . professor b: so , are we gon na be meeting here from now on ? i ' m happy to do that . we we had talked about it , cuz you have th the display and everything , that seems fine . grad d: , liz also asks whether we 're gon na have presentations every time . i do n't think we will need to do that but it 's so far it was as a visual aid for some things and professor b: no it 's worth it to ass to meet here to bring this , and assume that something may come up that we wanna look at . why not . grad d: the , she w she was definitely good in the sense that she showed us some of the weaknesses and also the fact that she was a real subject , is grad d: so that , w looking just looking at this data , listening to it , what can we get out of it in terms of our problem , is , she actually m said , she never s just spoke about entering , she just wanted to get someplace , and she said for buying . nuh ? so this is definitely interesting , and grad d: , and in the other case , where she wanted to look at the graffiti , also , not in the sentence " how do you get there ? " was pretty standard . except that there was a anaphora , for pointing at what she talked about before , and there she was talking about looking at pictures that are painted inside a wall on walls , so actually , you 'd need a lot of world knowledge . this would have been a classical " tango " , actually . , because graffiti is usually found on the outside and not on the inside , so the mistake would have make a mistake the system would have made a mistake here . ###summary: a test run of the data collection design was very successful. the group decided to hire the "wizard" and continue with the refinement of the design and recruitment of subjects. on the other hand , there was a presentation of a new version of the belief-net for the vista/enter/tango mode task. it is not a working net yet , but identifying clusters of features that define the output mode provides a visual aid for further work. there are potential problems from a combinatorics perspective. these can be tackled either with technical adjustments or through careful knowledge engineering. a base solution for the task would be to simply add some extra action-mode rules in the smartkom system. action modes , however , can be inferred more efficiently by feeding a collection of features -from the ontology , discourse history , parsing , etc.- into bayes-nets that would replace those rules. 
ideally , the results of this small task will give insights into the function of linguistic deep understanding. for instance , the final combination of features used in the current study may form a representation of the ontology , general enough to employ in any task that includes trajectors and paths. although the data collection test went well and it was decided to hire the "wizard" , there are minor amendments in the procedure to be carried out ( shortenting the preparatory reading , numbering tasks etc ). a presentation of the data collection design should be included in the forthcoming meeting with other research groups , along with some account of the system design and the use of belief-nets. the structure of the latter was a major issue during the meeting. their input vector is to be provided by information extracted from the various modules of the system , such as the ontology and discourse history , using standard rule-based methods. the output action essentially provides additional semantic parameters for the x-schema , and , in turn , may trigger the collection of more features from the data. the feature extraction could be carried out by a software tool checking object ( feature ) trees and filling them with appropriate values. for this concept to be put to work , further refinement of the belief-net variables is necessary , with particular attention on the combinatorics involved. after the trial run of the experiment , some minor issues , like the length of the preliminary reading and the need to order the tasks given to the subjects , were highlighted. it is also possible that the pool of subjects ends up comprising almost entirely of students; more variation in the sample is needed. from a system design perspective , the progress so far has shown that the combinatorics of the bayes-net even for a simple task , like the choice of vista/enter/tango mode , could render it unmanageable. the belief-net presented in the meeting is not a working bayes-net. consequently , it is as yet unclear what the decision nodes in the net are , and what values these can take. even if those were decided , how to extract the necessary information from the data , would still be an open issue. looking at the bigger picture , the current task is yet to provide insights into more general ways to achieve linguistic deep understanding. with these intricacies in mind , it is not easy to put together a presentation of the project cohesive and attractive enough for the other research groups in the institute. the first trial run of the experiments was very successful. after reading some information on a city , the subject , acting as a tourist , has to ask for information over a phoneline in order to carry out certain tasks. a "wizard" on the other side of the line pretends to be a computer system for half of the duration. for the second half the subject is aware of communicating with another human. in parallel to the data collection , there was further work on the belief-net for the inferences currently studied. the features determining the output mode ( vista , enter or tango ) of the belief-net have been grouped in categories: trajector , landmark , source , path , goal , parse , prosody , world knowledge , discourse and context. every particular feature , like "time of day" , "being in a hurry" , "business or tourism" , etc , will fit into on these categories. although the presented net is not a working bayes-net yet , it serves as a visual aid and stepping stone for the work to follow.
grad f: and , somewhere is my agenda . the most important thing is morgan wanted to talk about , the arpa demo . professor d: , so , here 's the thing . , why do n't we s again start off with , , i 'll get it . i 'll get the door . , we want to start off with the agenda . and then , given that , liz and andreas are gon na be ten , fifteen minutes late , we can try to figure out what we can do most effectively without them here . so so , one thing is , talk about demo , grad f: , we wanna talk about if w if we wanna add the data to the mar meeting recorder corpus . grad f: so why do n't we have that on the agenda and we 'll get to it and talk about it ? grad f: - . absinthe , which is the multiprocessor unix linux . it was andreas wanted to talk about segmentation and recognition , and update on sri recognition experiments . and then if ti if there 's time i wanted to talk about digits , but it looked like we were pretty full , so till next week . professor d: right . ok . , let 's see . the a certainly the segmentation and recognition we wanna maybe focus on when an - andreas is here since that was particularly his thing . professor d: smartkom also , andreas . absinthe , also he has been involved in a lot of those things . professor d: so , they 'll be inter i 'll be interested in all this , but , probably , if we had to pick something that we would talk on for ten minutes or so while they 're coming here . or i it would be , you think , reorganization status , or ? grad f: , chuck was the one who added out the agenda item . i do n't really have anything to say other than that we still have n't done it . phd b: maybe i said maybe we said this before just that we met and we talked about it and we have a plan for getting things organized and postdoc a: and i and a crucial part of that is the idea of not wanting to do it until right before the next level zero back - up so that there wo n't be huge number of added , grad f: although dave said that if we wanna do it , just tell him and he 'll do a d level zero then . grad f: so , we do need to talk a little bit about , we do n't need to do it during this meeting . we have a little more to discuss . but , we 're ready to do it . and , i have some web pages on ts more of the background . so , naming conventions and things like that , that i ' ve been trying to keep actually up to date . and i ' ve been sharing them with u - d uw folks also . postdoc a: i ' m , you ' ve been what ? showing them ? sharing them . professor d: , maybe , since that was a pretty short one , maybe we should talk about the ibm transcription status . someone can fill in liz and andreas later . grad f: so , we did another version of the beeps , where we separated each beeps with a spoken digit . chuck came up here and recorded some di himself speaking some digits , and so it just goes " beep one beep " and then the phrase , and then " beep two beep " and then the phrase . and that seems pretty good . , they 'll have a b easier time keeping track of where they are in the file . grad f: and we did it with the automatic segmentation , and i do n't think we ne we did n't look at it in detail . we just sent it to ibm . we we sorta spot - checked it . grad f: i sorta spot - checked here and there and it sounded pretty good . so . it 'll work . and , we 'll just hafta see what we get back from them . phd b: and the main thing will be if we can align what they give us with what we sent them . , that 's the crucial part . 
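the numbered-beep scheme just described can be sketched concretely : each chunk sent out is preceded by a beep , a spoken index digit , and another beep , so whatever comes back can be matched to the original segments by number even if the odd beep is misheard . a minimal python illustration ; the segment list and the returned-line format are assumptions , not the actual icsi / ibm pipeline .

    def build_prompt_sequence(segments):
        """Interleave audible index markers with the audio chunks, as sent out."""
        seq = []
        for i, seg in enumerate(segments, start=1):
            seq.append(("marker", f"<beep> {i} <beep>"))   # "beep one beep", etc.
            seq.append(("audio", seg))
        return seq

    def match_returned(lines):
        """Map transcripts returned as '<number>: text' back to chunk indices,
        so a stray or split beep cannot shift the whole alignment."""
        out = {}
        for line in lines:
            num, _, text = line.partition(":")
            if num.strip().isdigit():
                out[int(num.strip())] = text.strip()
        return out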
and we 'll be able to do that at with this new beep format . grad f: , so the problem wi last time is that there were errors in the transcripts where they put beeps where there were n't any , or and they put in extraneous beeps . and with the numbers there , it 's much less likely . phd b: , one interesting note is , or problem if this was just because of how i play it back , i say , snd - play and then the file , every once in a while , @ , like a beep sounds like it 's cut into two beeps . phd b: , and if that 's an , artifact of playback bu , i do n't think it 's probably in the original file . , but , phd b: but with this new format , that hopefully they 're not hearing that , and if they are , it should n't throw them . so . grad f: , maybe we better listen to it again , make , but , certainly the software should n't do that , phd b: i it 's probably just , mmm , somehow the audio device gets hung for a second , postdoc a: as long as they have one number , and they know that there 's only one beep maximum that goes with that number . grad f: the only the only part that might be confusing is when chuck is reading digits . grad f: yes . because , we do n't we did n't in order to cut them out we 'd have to listen to it . grad f: and we wanted to avoid doing that , so we they are transcribing the digits . grad f: although we could tell them , if you hear someone reading a digits string just say " bracket digit bracket " and do n't bother actually computing the di writing down the digits . postdoc a: that 'd be great . that 'd be what i ' m having the transcribers here do , cuz it can be extracted later . grad f: and then i wanted to talk about but as i said i we may not have time what we should do about digits . we have a whole pile of digits that have n't been transcribed . professor d: le - let 's talk about it , because that 's something that i know andreas is less interested in than liz is , so , . it 's good phd b: , brian i sent bresset sent brian a message about the meeting and i have n't heard back yet . so . i g hope he got it and hopefully he 's maybe he 's gone , he did n't even reply to my message . so . i should probably ping him just to make that he got it . grad f: alright . so , we have a whole bunch of digits , if we wanna move on to digits . professor d: actually , maybe i one one relate more related thing in transcription . so that 's the ibm . we ' ve got that sorted out . , how 're we doing on the rest of it ? postdoc a: we 're doing . i hire i ' ve hired two extra people already , expect to hire two more . and , i ' ve prepared , , a set of five which i ' m calling set two , which are now being edited by my head transcriber , in terms of spelling errors and all that . she 's also checking through and mar and monitoring , the transcription of another transcriber . , she 's going through and doing these kinds of checks . postdoc a: and , i ' ve moved on now to what i ' m calling set three . if i do it in sets groups of five , then have , like , a parallel processing through the current . and and you indicated to me that we have a g a goal now , for the , { nonvocalsound } the , darpa demo , of twenty hours . so , i ' m gon na go up to twenty hours , be that everything gets processed , and released , and that 's what my goal is . package of twenty hours right now , and then once that 's done , move on to the next . professor d: , so twenty hours . 
but i the other thing is that , that 's kinda twenty hours asap because the longer before the demo we actually have the twenty hours , the more time it 'll be for people to actually do things with it . postdoc a: they would like to do it full - time , several of these people . and and i do n't think it 's possible , really , to do this full - time , but , that what it shows is motivation to do as many hours as possible . professor d: , i the so the difference if , if the ibm works out , the difference in the job would be that they p primarily would be checking through things that were already done by someone else ? grad f: correcting . we 'll we 'll expect that they 'll have to move some time bins and do some corrections . postdoc a: and i , i ' ve also d , discovered so with the new transcriber i ' m so , lemme say that my , so , at present , the people have been doing these transcriptions a channel at a time . and , that , is useful , and t , and then once in a while they 'll have to refer to the other channels to clear something up . , i realize that , w i we 're using the pre - segmented version , and , the pre - segmented version is extremely useful , and would n't it be , useful also to have the visual representation of those segments ? and so i ' ve , i ' ve trained the new one , the new the newest one , to , use the visual from the channel that is gon na be transcribed at any given time . and that 's just amazingly helpful . because what happens then , is you scan across the signal and once in a while you 'll find a blip that did n't show up in the pre - segmentation . postdoc a: once in a while it 's a backchannel . sometimes it seems to be , similar to the ones that are being picked up . and they 're rare events , but you can really go through a meeting very quickly . you just you just , yo you s you scroll from screen to screen , looking for blips . and , that we 're gon na end up with , better coverage of the backchannels , but at the same time we 're benefitting tremendously from the pre - segmentation because there are huge places where there is just no activity . and , the audio quality is so good postdoc a: so that 's gon na , also , speed the efficiency of this part of the process . professor d: , . so , so let 's talk about the digits , since they 're not here yet . grad f: , so , we have a whole bunch of digits that we ' ve read and we have the forms and so on , but only a small number of that ha , not a small number only a subset of that has been transcribed . and so we need to decide what we wanna do . and , liz and andreas actually they 're not here , but , they did say at one point that they thought they could do a pretty good job of just doing a forced alignment . and , again , i do n't think we 'll be able to do with that alone , because , sometimes people correct themselves and things like that . but so , i was just wondering what people thought about how automated can we make the process of finding where the people read the digits , doing a forced alignment , and doing the timing . phd b: you 're talking about as a pre - processing step . right , morgan ? is that what you 're ? professor d: , i ' m not quite what i ' m talking about . i , we 're talking about digits now . and and so , there 's a bunch of that has n't been marked yet . and , there 's the issue that they we was said , but do we ? professor d: because people make mistakes and . 
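one way to automate the digit screening being discussed is to force-align each recorded string against the digits printed on the form and flag low-scoring utterances ( misreadings , false starts ) for a human pass . a sketch under stated assumptions : force_align stands in for whatever aligner would be used and is not a real api , and the threshold is arbitrary .

    def screen_digits(utterances, force_align, floor=-5.0):
        """utterances: list of (wav_path, reference_digit_string) pairs.
        Returns (clean, suspect); only `suspect` needs a human check."""
        clean, suspect = [], []
        for wav, ref in utterances:
            score = force_align(wav, ref)   # e.g. average per-frame log-likelihood
            (clean if score >= floor else suspect).append((wav, ref))
        return clean, suspect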
i was just asking , just out of curiosity , if with , the sri recognizer getting one percent word error , would we do better ? so , if you do a forced alignment but the force but the transcription you have is wrong because they actually made mistakes , or false starts , it 's much less c it 's much less common than one percent ? professor d: right ? so that 's just my question . i ' m not saying it should be one way or the other , but it 's if grad f: but , there 're a couple different of doing it . we could use the tools i ' ve already developed and transcribe it . hire some people , or use the transcribers to do it . we could let ibm transcribe it . , they 're doing it anyway , and unless we tell them different , they 're gon na transcribe it . , or we could try some automated methods . and my tendency right now is , if ibm comes back with this meeting and the transcript is good , just let them do it . professor d: , it 's y you raised a point , , euphemistically but , m maybe it is a serious problem . ho - what will they do when they go hear " beep seven three five two " , you think they 'll we 'll get ? postdoc a: it , it 'd be preceded by " i ' m reading transcript so - and - so " ? so , if they 're processing it at grad f: , it 'll be it will be in the midst of a digit string . so it , there might be a place where it 's " beep seven beep eight beep " . but , they 're gon na macros for inserting the beep marks . and so , i do n't think it 'll be a problem . we 'll have to see , but i do n't think it 's gon na be a problem . professor d: , i , that 's if they are going to transcribe these things , certainly any process that we 'd have to correct them , or whatever is needs to be much less elaborate for digits than for other . so , why not ? that was it ? grad f: that was it . just , what do we do with digits ? we have so many of them , and it 'd be to actually do something with them . professor d: i in berkeley , . so , you you have to go a little early , right ? at twenty professor d: alright . so le let 's make we do the ones that , saved you . professor d: so there was some in in adam 's agenda list , he had something from you about segmentation this last recognition ? phd i: , . so this is just partly to inform everybody , and to get , input . , so , { nonvocalsound } , we had a discussion don and liz and i had discussion last week about how to proceed with , , with don 's work , phd i: and , one of the obvious things that occur to us was that we 're since we now have thilo 's segmenter and it works , amazingly , we should actually re - evaluate the recognition , results using , without cheating on the segmentations . phd e: and how do we find the transcripts for those so that ? the references for those segments ? phd i: no , actually , nist has , m a fairly sophisticated scoring program that you can give a , a time , phd i: , you just give two time - marked sequences of words , and it computes the , the th phd i: it does all the work for you . so , it we just and we use that actually in hub - five to do the scoring . so what we ' ve been using so far was a simplified version of the scoring . and we can handle the type of problem we have here . phd i: it does time - constrained word - alignment . so that should be possible . that should n't be a problem . , so that was the one thing , and the other was that , what was the other problem ? ! that thilo wanted to use the recognizer alignments to train up his , speech detector . 
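( the speech-detector point is picked up again just below . ) the time-constrained scoring just mentioned aligns two time-marked word sequences , letting a hypothesis word count as correct only if it also overlaps the reference word in time . this is a toy version of that idea , ordinary edit-distance dynamic programming with a time-overlap gate ; the real tool , nist's sclite , differs in options and detail .

    def overlaps(a, b, slack=0.5):
        """a, b: (start, end) spans in seconds; true if they overlap within slack."""
        return a[1] > b[0] - slack and b[1] > a[0] - slack

    def time_constrained_wer(ref, hyp, slack=0.5):
        """ref, hyp: lists of (word, start, end). Returns the word error rate."""
        n, m = len(ref), len(hyp)
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i                       # deletions
        for j in range(m + 1):
            d[0][j] = j                       # insertions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                same = (ref[i - 1][0] == hyp[j - 1][0]
                        and overlaps(ref[i - 1][1:], hyp[j - 1][1:], slack))
                d[i][j] = min(d[i - 1][j - 1] + (0 if same else 1),   # match / sub
                              d[i - 1][j] + 1,                        # deletion
                              d[i][j - 1] + 1)                        # insertion
        return d[n][m] / max(n, 1)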
, so that we could use , there would n't be so much hand labelling needed to , to generate training data for the speech detector . phd e: so , it 's s , eight meetings which i ' m using , and , it 's before it was twenty minutes of one meeting . so should be a little bit better . phd g: actually , i had a question about that . if you find that you can lower the false alarms that you get where there 's no speech , that would be useful for us to know . so , phd g: so , r right now you get f fal , false , speech regions when it 's just like , breath like that , and i 'd be interested to know the wha if you retrain , do those actually go down or not ? because of phd e: i 'll can make an can , like , make a c comparison of the old system to the new one , and then phd g: , just to see if by doing nothing in the modeling of just having that training data wh what happens . professor d: another one that we had on adam 's agenda that definitely involved you was s something about smartkom ? grad f: so , rob porzel , porzel ? and the , porzel and the , smartkom group are collecting some dialogues . grad f: they have one person sitting in here , looking at a picture , and a wizard sitting in another room somewhere . and , they 're doing a travel task . and , it involves starting i believe starting with a it 's it 's always the wizard , but it starts where the wizard is pretending to be a computer and it goes through a , speech generation system . grad f: synthesis system . , and then , it goes to a real wizard and they 're evaluating that . and they wanted to use this equipment , and so the w question came up , is , here 's some more data . should this be part of the corpus or not ? and my attitude was yes , because there might be people who are using this corpus for acoustics , as opposed to just for language . , or also for dialogue of various sorts . , so it 's not a meeting . right ? because it 's two people and they 're not face to face . professor d: a minute . so , wanted to understand it , cuz i ' m , had n't quite followed this process . so , it 's wizard in the sen usual sense that the person who is asking the questions does n't know that it 's , a machi not a machine ? phd i: actually actually , w the we do this who came up with it , but it 's a really clever idea . we simulate a computer breakdown halfway through the session , and so then after that , the person 's told that they 're now talking to a , to a human . phd i: so , we collect both human - computer and human - human data , essentially , in the same session . professor d: you might wanna try collecting it the other way around sometime , saying that th the computer is n't up yet and then so then you can separate it out whether it 's the beginning or end effects . grad f: i have to go now . you can talk to the computer . phd b: if you tell them that they 're the computer part is running on a windows machine . and the whole breakdown thing kinda makes sense . grad f: and i said , " that 's silly , if we 're gon na try to do it for a corpus , there might be people who are interested in acoustics . " phd i: no , the question is do we save one or two far - field channels or all of them ? grad f: i see no reason not to do all of them . that that if we have someone who is doing acoustic studies , it 's to have the same for every recording . professor d: so , what is the purpose of this recording ? this is to get acoustic and language model training data for smartkom ok .
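picking up thilo's speech-detector point : recognizer alignments can be turned into frame-level speech / non-speech labels with almost no hand work . a minimal sketch ; the 10 ms frame shift and the ( start , end ) alignment format are assumptions for illustration .

    FRAME_SHIFT = 0.010   # seconds per frame, a common choice

    def labels_from_alignment(word_spans, total_duration):
        """word_spans: (start, end) times in seconds from the aligner.
        Returns one 0/1 label per frame: 1 = speech, 0 = non-speech."""
        n_frames = int(total_duration / FRAME_SHIFT)
        labels = [0] * n_frames
        for start, end in word_spans:
            lo = max(int(start / FRAME_SHIFT), 0)
            hi = min(int(end / FRAME_SHIFT) + 1, n_frames)
            for f in range(lo, hi):
                labels[f] = 1
        return labels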
phd i: it 's to be traini to b training data and development data for the smartkom system . phd e: but but i ' m not about the legal aspect of that . is is there some contract with smartkom about the data ? what they or , is that our data which we are collecting here , professor d: we ' ve never signed anything that said that we could n't use anything that we did . professor d: no that 's not a problem . i l look , it seems to me that if we 're doing it anyway and we 're doing it for these purposes that we have , and we have these distant mikes , we definitely should re should save it all as long as we ' ve got disk space , and disk is pretty cheap . professor d: now th so we save it because it 's potentially useful . and now , what do we do with it is a s separate question . , anybody who 's training something up could choose to put it , to u include this or not . i would not say it was part of the meetings corpus . it is n't . but it 's some other data we have , and if somebody doing experiment wants to train up including that then they can . grad f: so it 's it it i it the begs the question of what is the meeting corpus . so if , at uw they start recording two - person hallway conversations is that part of the meeting corpus ? professor d: it 's i th think the idea of two or more people conversing with one another is key . phd g: what if we just give it a name like we give these meetings a name ? phd g: and then later on some people will consider it a meeting and some people wo n't , grad f: that was my intention . so so s so part of the reason that i wanted to bring this up is , do we wanna handle it as a special case or do we wanna fold it in , grad f: we give everyone who 's involved as their own user id , give it session i ds , let all the tools that handle meeting recorder handle it , or do we wanna special case it ? and if we were gon na special case it , who 's gon na do that ? phd i: , it makes sense to handle it with the same infrastructure , since we do n't want to duplicate things unnecessarily . phd i: but as far as distributing it , we should n't label it as part of this meeting corpus . we should let it be its own corp postdoc a: i ha i have an extra point , which is the naturalness issue . because we have , like , meetings that have a reason . that 's one of the reasons that we were talking about this . and and those and this sounds like it 's more of an experimental setup . it 's got a different purpose . professor d: it 's scenario - based , it 's human - computer interface it 's really pretty different . but i have no problem with somebody folding it in for some experiment they 're gon na do , but i do n't it does n't match anything that we ' ve described about meetings . whereas everything that we talked about them doing at uw and really does . they 're actually talking grad f: so w so what does that mean for how we are gon na organize things ? professor d: you can you can again , as andreas was saying , if you wanna use the same tools and the same conventions , there 's no problem with that . it 's just that it 's , different directory , it 's called something different , it 's it is different . you ca n't just fold it in as if it 's , digits are different , too . grad f: and it 's just you just mark the transcripts differently . so so one option is you fold it in , and just simply in the file you mark somewhere that this is this type of interaction , rather than another type of interaction . professor d: , i don i would n't call reading digits " meetings " . 
, we were doing grad f: , but , i put it under the same directory tree . , it 's in " user doctor speech data mr " . grad f: my preference is to have a single procedure so that i do n't have to think too much about things . professor d: o - you you can use whatever procedure you want that 's p convenient for you . grad f: if we do it any other way that means that we need a separate procedure , and someone has to do that . professor d: all i ' m saying is that there 's no way that we 're gon na tell people that reading digits is meetings . and similarly we 're not gon na tell them that someone talking to a computer to get travel information is meetings . those are n't meetings . but if it makes it easier for you to pu fold them in the same procedures and have them under the same directory tree , knock yourself out . phd b: there 's a couple other questions that i have too , and one of them is , what about , consent issues ? and the other one is , what about transcription ? are ? phd i: that 's a that 's another argument to keep it separate , because it 's gon na follow the smartkom transcription conventions and not the icsi meeting transcription conventions . professor d: but i ' m no one would have a problem with our folding it in for some acoustic modeling or some things . do we h do we have , , american - born folk , reading german , pla , place names and ? is that ? grad f: disk might eventually be an issue so we might need to , get some more disk pretty soon . grad f: we 're probably a little more than that because we 're using up some space that we should n't be on . so , once everything gets converted over to the disks we 're supposed to be using we 'll be probably , seventy - five percent . phd b: , when i was looking for space for thilo , i found one disk that had , it was nine gigs and another one had seventeen . and everything else was sorta committed . grad f: i ' m much more concerned about the backed - up . the non - backed - up , grad f: i is cheap . , if we need to we can buy a disk , hang it off a s , workstation . if it 's not backed - up the sysadmins do n't care too much . professor d: so , anytime we need a disk , we can get it at the rate that we 're phd i: you can i should n't be saying this , but , you can just , since the back - ups are every night , you can recycle the backed - up diskspace . professor d: da - we had allowed dave to listen to these , recordings . i me and there 's been this conversation going on about getting another file server , and we can do that . we 'll take the opportunity and get another big raft of disk , i . phd i: , i think there 's an argument for having , you could use our old file server for disks that have data that is very rarely accessed , and then have a fast new file server for data that is , heavily accessed . grad f: my understanding is , the issue is n't really the file server . we could always put more disks on . phd b: i think the file server could become an issue as we get a whole bunch more new compute machines . phd b: and we ' ve got , fifty machines trying to access data off of abbott at once . phd i: , i think we ' ve raised this before and someone said this is not a reliable way to do it , but the what about putting the on , like , c - cd - rom or dvd ? grad f: that was me . i was the one who said it was not reliable . the - they wear out . the the th phd i: but they wear out just from sitting on the shelf ? or from being read and read ? grad f: no . read and write do n't hurt them too much unless you scratch them . 
but the r the write once , and the read - writes , do n't last . so you do n't wa you do n't wanna put ir un reproduceable data on them . phd i: but if that then you would think you 'd hear much more clamoring about data loss and grad f: they 're on cd , but they 're not tha that 's not the only source . grad f: they have them on disk . and they burn new ones every once in a while . but if you go k grad f: but , the burned ones , when i say two or three years what i ' m saying is that i have had disks which are gone in a year . grad f: on the average , it 'll probably be three or four years . but , i you do n't want to per p have your only copy on a media that fails . grad f: and they do . , if you have them professionally pressed , y , they 're good for decades . phd i: so how about ? so so how about putting them on that plus , like on a on dat or some other medium that is n't risky ? grad f: th we can already put them on tape . and the tape is hi is very reliable . so the only issue is then if we need access to them . so that 's fine f if we do n't need access to them . phd i: , if you if they last say , they actually last , like , five years , in the typical case , and occasionally you might need to recreate one , and then you get your tape out , but otherwise you do n't . ca n't you just put them on ? grad h: so you just archive it on the tape , and then put it on cd as ? grad f: , you can do that but that 's pretty annoying , because the c ds are so slow . phd b: what 'd be is a system that re - burned the c ds every year . grad f: the the cd is an alternative to tape . icsi already has a perfectly good tape system and it 's more reliable . phd i: one one thing i do n't understand is , if you have the data if you if the meeting data is put on disk exactly once , then it 's backed - up once and the back - up system should never have to bother with it , more than once . grad f: , regardless , first of all there was , a problem with the archive in that i was every once in a while doing a chmod on all the directories an or recursive chmod and chown , because they were n't getting set correctly every once in a while , and i was just , doing a minus r star , not realizing that caused it to be re - backed - up . but normally you 're correct . but even without that , the back - up system is becoming saturated . phd i: but but this back - up system is smart enough to figure out that something has n't changed and does n't need to be backed - up again . professor d: the b th the at least the once tha that you put it on , it would kill that . grad f: , but we still have enough changed that the nightly back - ups are starting to take too long . grad f: it has nothing to do with the meeting . it 's just the general icsi back - up system is becoming saturated . phd i: so , what if we buy , what do they call these , high density ? grad f: , why do n't you have this have a this conversation with dave johnson tha rather than with me ? phd i: no , no . because this is maybe something that we can do without involving dave , and , putting more burden on him . how about we buy , uh , one of these high density tape drives ? and we put the data actually on non - backed - up disks . and we do our own back - up once and for all , and then and we do n't have to bother this @ up ? professor d: no , we have s we do n't we have our own ? something wi th that does n't that is n't used by the back - up gang ? do n't we have something downstairs ? grad f: but no , but andreas 's point is a good one . 
and we do n't have to do anything ourselves to do that . they 're already right now on tape . so your point is , and it 's a good one , that we could just get more disk and put it there . professor d: , that 's what i was gon na say , is that a disk is so cheap it 's es essentially , close to free . and the only thing that costs is the back - up issue , to first order . professor d: and we can take care of that by putting it on non - back up drives and just backing it up once onto this tape . phd g: so , who 's gon na do these back - ups ? the people that collect it ? grad f: , i 'll talk to dave , and see what th how { nonvocalsound } what the best way of doing that is . grad f: there 's a little utility that will manually burn a tape for you , and that 's probably the right way to do it . phd b: , and we should probably make that part of the procedure for recording the meetings . grad f: we 're g we 're gon na automate that . my intention is to do a script that 'll do everything . phd i: , but then you 're effectively using the resources of the back - up system . or is that a different tape robot ? phd i: , i ' m saying is @ i if you go to dave , and ask him " can i use your tape robot ? " , he will say , " that 's gon na screw up our back - up operation . " grad f: no , we wo n't . he 'll say " if that means that it 's not gon na be backed - up standardly , great . " professor d: he - i dave has promoted this in the past . so i do n't think he 's actually against it . grad f: , it 's just a utility which queues up . it just queues it up and when it 's available , it will copy it . and then you can tell it to then remove it from the disk or you can , do it a few days later or whatever you wanna do , after you confirm that it 's really backed - up . nw ? postdoc a: and if you did that during the day it would never make it to the nightly back - ups . phd i: , it if he you have to put the data on a non - backed - up disk to begin with . postdoc a: , but you can have it nw archive to you can have , a non - backed - up disk nw archived , grad f: and then it never which i ' m would make ever the sysadmins very happy . so , that 's a good idea . that 's what we should do . so , that means we 'll probably wanna convert all those files filesystems to non - backed - up media . professor d: , another , thing on the agenda said sri recognition experiments ? what 's that ? phd b: so , i ' m just playing with , the number of gaussians that we use in the recognizer , and phd i: , you have to sa you have to tell people that you 're doing you 're trying the tandem features . professor d: , i got confused by the results . it sai because , the meeting before , you said " ok , we got it down to where they 're within a tenth of a percent " . phd i: that was that was before i tried it on the females . see , women are nothi are , trouble . phd i: we had reached the point where , on the male portion of the development set , the , or one of the development sets , i should say the , the male error rate with , icsi plp features was identical with , sri features . which are mfcc . so , then , " , great . i 'll j i 'll just let 's make everything works on the females . " and the error rate , there was a three percent difference . so , phd i: , and the test data is callhome and switchboard . so , so then , and plus the vocal tract length normalization did n't actually made things worse . so something 's really wrong . so professor d: aha ! 
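the archive-once workflow described above , keep the data on non-backed-up disk , queue it to tape , and remove the disk copy only after the tape write is confirmed , might look something like this . the queueing command is a placeholder for icsi's local utility , not a real cli ; only the checksum bookkeeping is concrete .

    import hashlib
    import subprocess

    def sha1_of(path, bufsize=1 << 20):
        """Checksum the file so the tape copy can be verified later."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def queue_for_archive(path, manifest):
        """Record a checksum and queue the file for the tape robot; the disk
        copy is deleted in a later step, after the write is confirmed."""
        manifest[path] = sha1_of(path)
        subprocess.run(["site-archive", path], check=True)   # hypothetical command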
so but you see , now , between the males and the females , there 's certainly a much bigger difference in the scaling range , than there is , say , just within the males . and what you were using before was scaling factors that were just from the m the sri front - end . and that worked fine . professor d: , but now you 're looking over a larger range and it may not be so fine . phd i: d so the one thing that i then tried was to put in the low - pass filter , which we have in the so , most hub - five systems actually band - limit the , at about , thirty - seven hundred , hertz . although , normally , the channel goes to four thousand . and that actually helped , a little bit . and it did n't hurt on the males either . and i ' m now , trying the , and suddenly , also the v the vocal tract length normalization only in the test se on the test data . so , you can do vocal tract length normalization on the test data only or on both the training and the test . and you expect it to help a little bit if you do it only on the test , and s more if you do it on both training and test . and so the it now helps , if you do it only on the test , and i ' m currently retraining another set of models where it 's both in the training and the test , and then we 'll have , hopefully , even better results . but there 's it looks like there will still be some difference , maybe between one and two percent , for the females . and so , , i ' m open to suggestions . and it is true that the , we are using the but it ca n't be just the vtl , because if you do n't do vtl in both systems , , the females are considerably worse in the with the plp features . phd g: , what 's the standard ? , so the performance was actually a little better on females than males . phd i: , that ye overall , yes , but on this particular development test set , they 're actually a little worse . but that 's beside the point . we 're looking at the discrepancy between the sri system and the sri system when trained with icsi features . phd g: i ' m just wondering if that if you have any indication of your standard features , professor d: , is this , iterative , baum - welch training ? or is it viterbi training ? or ? phd i: , actually , we just do a s a fixed number of iterations . , in this case four . , which , we used to do only three , and then we found out we can squeeze and it was , we 're s we 're keeping it on the safe side . but you 're d it might be that one more iteration would help , but it 's phd i: no , but with baum - welch , there should n't be an over - fitting issue , really . professor d: it d if you remember some years ago bill byrne did a thing where he was looking at that , phd i: we can , that 's the easy one to check , because we save all the intermediate models professor d: , i ' m in each case how do you determine , the usual fudge factors ? the , the , language , scaling , acoustic scaling , phd i: i ' m actually re - optimizing them . although that has n't shown to make a big difference . professor d: and the pru the question he was asking at one point about pruning , remember that one ? professor d: , he was he 's it looked like the probabil at one point he was looking at the probabilities he was getting out at the likelihoods he was getting out of plp versus mel cepstrum , and they looked pretty different , phd i: . , the likelihoods are you ca n't directly compare them , because , for every set of models you compute a new normalization . 
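stepping back to the vocal-tract-length normalisation discussed a moment ago : it is commonly realised as a piecewise-linear warp of the frequency axis ahead of the mel filterbank , and the band-limiting mentioned above is just a cap on the filterbank range . this is the textbook form of the warp , not necessarily the exact sri or icsi implementation ; typical warp factors run from roughly 0.88 to 1.12 .

    def vtln_warp(f, alpha, f_max=4000.0, knee_frac=0.875):
        """Piecewise-linear VTLN: scale frequency f by alpha up to a knee,
        then a straight segment that maps f_max back onto f_max, so the
        warped axis still spans the same band."""
        knee = knee_frac * f_max
        if f <= knee:
            return alpha * f
        slope = (f_max - alpha * knee) / (f_max - knee)
        return alpha * knee + slope * (f - knee)

    # Band-limiting like the ~3700 Hz low-pass mentioned above would simply
    # drop filters whose centre frequency falls outside the chosen band.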
and so these log probabilities , they are n't directly comparable because you have a different normalization constants for each model you train . professor d: but , still it 's a question if you have some threshold somewhere in terms of beam search , phd b: , if you have one threshold that works because the range of your likelihoods is in this area phd i: we prune very conservatively . , as we saw with the meeting data , we could probably tighten the pruning without really so we have a very open beam . professor d: but , you 're only talking about a percent or two . here we 're - we 're saying that we there gee , there 's this b , there 's this difference here . and it see cuz , i there could be lots of things . but but , let 's suppose just for a second that , we ' ve taken out a lot of the major differences , between the two . professor d: , we 're already using the mel scale and we 're using the same style filter integration , and , we 're making that low and high phd i: actually , there is the difference in that . so , for the plp features we use the triangular filter shapes . and for the in the sri front - end we use the trapezoidal one . phd i: , now it 's the same . it 's thirty to seven hundred and sixty hertz . professor d: before we i th with straight plp , it 's trapezoidal also . but then we had a slight difference in the scale . , so . phd i: since currently the feacalc program does n't allow me to change the filter shape independently of the scale . and , i did the experiment on the sri front - end where i tried the y where the standard used to be to use trapezoidal filters . you can actually continuously vary it between the two . and so i wen i swi i tried the trap , triangular ones . and it did slightly worse , but it 's really a small difference . phd i: , exactly . so , it 's not i do n't think the filter shape by itself will make a huge difference . professor d: so the oth the other thing that so , f i we ' ve always viewed it , anyway , as the major difference between the two , is actually in the smoothing , that the , plp , and the reason plp has been advantageous in , slightly noisy situations is because , plp does the smoothing at the end by an auto - regressive model , and mel cepstrum does it by just computing the lower cepstral coefficients . so , - . phd i: so one thing i have n't done yet is to actually do all of this with a much larger with our full training set . so right now , we 're using a i , forty ? i i it 's a f training set that 's about , , by a factor of four smaller than what we use when we train the full system . so , some of these smoothing issues are over - fitting for that matter . and the baum - welch should be much less of a factor , if you go full whole hog . phd i: and so , w so , just so the strategy is to first treat things with fast turn - around on a smaller training set and then , when you ' ve , narrowed it down , you try it on a larger training set . and so , we have n't done that yet . professor d: now the other que related question , though , is , what 's the boot models for these things ? phd i: th - th the boot models are trained from scratch . so we compute , so , we start with a , alil alignment that we computed with the b the best system we have . and and then we train from scratch . so we com we do a , w we collect the , the observations from those alignments under each of the feature sets that we train . 
and then , from there we do , there 's a lot of , actually the way it works , you first train a phonetically - tied mixture model . you do a total of first you do a context - independent ptm model . then you switch to a context you do two iterations of that . then you do two iterations of context - dependent phonetically - tied mixtures . and then from that you do the you go to a state - clustered model , and you do four iterations of that . so there 's a lot of iterations overall between your original boot models and the final models . i do n't think that we have never seen big differences . once " , now i have these much better models . i 'll re - generate my initial alignments . then i 'll get much better models at the end . " made no difference whatsoever . it 's it 's , i professor d: but , this for making things worse . this it migh th - the thought is possible another possible partial is if the boot models used a comple used a different feature set , that phd i: but there are no boot models , . you you 're not booting from initial models . you 're booting from initial alignments . professor d: , they will find boundaries a little differently , though , all th all that thing is actually slightly different . i 'd expect it to be a minor effect , phd i: so , we e w f w for a long time we had used boot alignments that had been trained with a with the same front - end but with acoustic models that were , like , fifteen percent worse than what we use now . and with a dict different dictionary with a considerably different dictionary , which was much less detailed and much less - suited . and so , then we switched to new boot alignments , which now had the benefit of all these improvements that we ' ve made over two years in the system . and , the result in the end was no different . so , what i ' m saying is , the exact nature of these boot alignments is probably not a big factor in the quality of the final models . professor d: , maybe not . but it i st still see it as , there 's a history to this , too , but i , i do n't wanna go into , but i th it could be the things that it the data is being viewed in a certain way , that a beginning is here rather than there and , because the actual signal - processing you 're doing is slightly different . but , it 's that 's probably not it . phd i: anyway , i should really reserve , any conclusions until we ' ve done it on the large training set , and until we ' ve seen the results with the vtl in training . professor d: at some point you also might wanna take the same thing and try it on , some broadcast news data else that actually has some noisy components , so we can see if any conclusions we come to holds across different data . grad f: . just what we were talking about before , which is that i ported a blass library to absinthe , and then got it working with fast - forward , and got a speedup roughly proportional to the number of processors times the clock cycle . so , that 's pretty good . , i ' m in the process of doing it for quicknet , but there 's something going wrong and it 's about half the speed that i was estimating it should be , and i ' m not why . but i 'll keep working on it . but the what it means is that it 's likely that for net training and forward passes , we 'll absinthe will be a good machine . especially if we get a few more processors and upgrade the processors . professor d: probably just throw away the old ones , and for the box , and i 'll just go buy their process . 
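for reference , the training recipe walked through above can be written down as a schedule : two iterations of context-independent phonetically-tied mixtures , two of context-dependent ptm , then four of state-clustered models , starting from boot alignments rather than boot models . train_stage is a placeholder for the actual trainer , not a real api .

    SCHEDULE = [
        ("ci_ptm", 2),             # context-independent phonetically-tied mixtures
        ("cd_ptm", 2),             # context-dependent phonetically-tied mixtures
        ("state_clustered", 4),    # state-clustered models
    ]

    def run_training(boot_alignments, train_stage):
        """Each stage starts from the previous stage's models; the first stage
        starts from the boot alignments alone."""
        models = None
        for stage, iters in SCHEDULE:
            for it in range(iters):
                models = train_stage(stage, it, boot_alignments, models)
        return models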
grad f: we 'd have to get a almost certainly have to get a , netfinity server . professor d: is is liz coming back , do , you do n't . see you . so , they 're having tea out there . so i the other thing that we were gon na talk about is , demo . and , so , these are the demos for the , july , meeting and , darpa mee professor d: , sixteenth , eighteenth . so , we talked about getting something together for that , but maybe , maybe we 'll just put that off for now , given that but maybe we should have a sub - meeting , , probably , adam and , chuck and me should talk about should get together and talk about that sometime soon . professor d: something like that . , , maybe we 'll involve dan ellis at some level as . ok . the the tea is going , so , i suggest we do , a unison . grad f: which is gon na be a little hard for a couple people because we have different digits forms . we have a i found a couple of old ones . grad f: so , the idea is just to read each line with a short pause between lines , grad f: not between and , since we 're in a hurry , we were just gon na read everyone all at once . so , if you sorta plug your ears and read grad f: so first read the transcript number , and then start reading the digits . ok ? one , two , three .
###summary: this meeting mainly outlines the progress of the meeting recorder project. in particular , the group discuss their preparation of materials for the transcriptions of digits by ibm , and also the human transcribers who are working towards preparing the set of 20 hours for the darpa meeting. other discussion focuses on the re-evaluation of recognition without cheating on segmentation , and also how sri recognition can be improved , especially for the female group. a number of issues regarding the management of data are addressed by the group: the inclusion of different data types in the corpus , and the storage and back-up of the group's data. progress has been made in naming conventions , with file reorganisation to be done at a later date; however , this was not discussed fully due to chuck's absence. finally , absinthe is now up and running with improved performance. discussion of demos for the july darpa meeting was left to the individuals concerned. a number of data issues were resolved: after discussing the human-computer interaction smartkom data , the group decide that different types of data can be included in the meeting corpus , but that it should be structured into different directories according to data type. some of the data storage problems can be overcome by backing up using the nw archive. however , file reorganisation will be left until just before the zero-level back-up. the group decide to use ibm transcription of the digits , in addition to automatic methods. if ibm methods work , transcribers will check and correct these. discussion of demos for the july meeting has been postponed: the individuals concerned will meet independently. disk space is an issue for the group , especially in terms of back-up , which is 75% full: dvds or cds are unreliable and cannot be used , and although tape is reliable , this creates access issues. experimentation with the sri recognition has shown greater error when using tandem features , with vocal tract normalisation also making results worse. in particular , more error is found with the female group , which is 1-2% worse than the males. digits and beeps have been re-recorded by chuck to aid ibm transcription and enable alignment. the group discuss the extent to which digits can be automatically transcribed , including their experimentation with forced alignment and speech recognition. transcription is progressing well: two transcribers have been hired and two more will be hired soon. five "set 2" meetings are being edited by the head transcriber , and "set 3" is being prepared , with the aim of having 20 hours available for the darpa demo. pre-segmentation is very useful , with visual information desirable for transcription of backchannel behaviour. now that thilo's segmenter is working , the group discuss re-evaluation of recognition without cheating on the segmentation , possibly using time-constrained alignment. also , they discuss the use of recogniser alignments to train the speech detector. the group discuss possible ways to improve the sri recognition error rate; suggestions include use of a low-pass filter , or retraining models. differences in smoothing are proposed to be mainly responsible for the difference between the male and female results. file reorganisation was discussed briefly as chuck was not present; however , progress has been made in sharing file naming conventions with uw. also , the absinthe machine is now working well , and has speeded up in proportion to its extra processors.
###dialogue: grad f: and , somewhere is my agenda . the most important thing is morgan wanted to talk about , the arpa demo . professor d: , so , here 's the thing . , why do n't we s again start off with , , i 'll get it . i 'll get the door . , we want to start off with the agenda . and then , given that , liz and andreas are gon na be ten , fifteen minutes late , we can try to figure out what we can do most effectively without them here . so so , one thing is , talk about demo , grad f: , we wanna talk about if w if we wanna add the data to the mar meeting recorder corpus . grad f: so why do n't we have that on the agenda and we 'll get to it and talk about it ? grad f: - . absinthe , which is the multiprocessor unix linux . it was andreas wanted to talk about segmentation and recognition , and update on sri recognition experiments . and then if ti if there 's time i wanted to talk about digits , but it looked like we were pretty full , so till next week . professor d: right . ok . , let 's see . the a certainly the segmentation and recognition we wanna maybe focus on when an - andreas is here since that was particularly his thing . professor d: smartkom also , andreas . absinthe , also he has been involved in a lot of those things . professor d: so , they 'll be inter i 'll be interested in all this , but , probably , if we had to pick something that we would talk on for ten minutes or so while they 're coming here . or i it would be , you think , reorganization status , or ? grad f: , chuck was the one who added out the agenda item . i do n't really have anything to say other than that we still have n't done it . phd b: maybe i said maybe we said this before just that we met and we talked about it and we have a plan for getting things organized and postdoc a: and i and a crucial part of that is the idea of not wanting to do it until right before the next level zero back - up so that there wo n't be huge number of added , grad f: although dave said that if we wanna do it , just tell him and he 'll do a d level zero then . grad f: so , we do need to talk a little bit about , we do n't need to do it during this meeting . we have a little more to discuss . but , we 're ready to do it . and , i have some web pages on ts more of the background . so , naming conventions and things like that , that i ' ve been trying to keep actually up to date . and i ' ve been sharing them with u - d uw folks also . postdoc a: i ' m , you ' ve been what ? showing them ? sharing them . professor d: , maybe , since that was a pretty short one , maybe we should talk about the ibm transcription status . someone can fill in liz and andreas later . grad f: so , we did another version of the beeps , where we separated each beeps with a spoken digit . chuck came up here and recorded some di himself speaking some digits , and so it just goes " beep one beep " and then the phrase , and then " beep two beep " and then the phrase . and that seems pretty good . , they 'll have a b easier time keeping track of where they are in the file . grad f: and we did it with the automatic segmentation , and i do n't think we ne we did n't look at it in detail . we just sent it to ibm . we we sorta spot - checked it . grad f: i sorta spot - checked here and there and it sounded pretty good . so . it 'll work . and , we 'll just hafta see what we get back from them . phd b: and the main thing will be if we can align what they give us with what we sent them . , that 's the crucial part . 
and we 'll be able to do that at with this new beep format . grad f: , so the problem wi last time is that there were errors in the transcripts where they put beeps where there were n't any , or and they put in extraneous beeps . and with the numbers there , it 's much less likely . phd b: , one interesting note is , or problem if this was just because of how i play it back , i say , snd - play and then the file , every once in a while , @ , like a beep sounds like it 's cut into two beeps . phd b: , and if that 's an , artifact of playback bu , i do n't think it 's probably in the original file . , but , phd b: but with this new format , that hopefully they 're not hearing that , and if they are , it should n't throw them . so . grad f: , maybe we better listen to it again , make , but , certainly the software should n't do that , phd b: i it 's probably just , mmm , somehow the audio device gets hung for a second , postdoc a: as long as they have one number , and they know that there 's only one beep maximum that goes with that number . grad f: the only the only part that might be confusing is when chuck is reading digits . grad f: yes . because , we do n't we did n't in order to cut them out we 'd have to listen to it . grad f: and we wanted to avoid doing that , so we they are transcribing the digits . grad f: although we could tell them , if you hear someone reading a digits string just say " bracket digit bracket " and do n't bother actually computing the di writing down the digits . postdoc a: that 'd be great . that 'd be what i ' m having the transcribers here do , cuz it can be extracted later . grad f: and then i wanted to talk about but as i said i we may not have time what we should do about digits . we have a whole pile of digits that have n't been transcribed . professor d: le - let 's talk about it , because that 's something that i know andreas is less interested in than liz is , so , . it 's good phd b: , brian i sent bresset sent brian a message about the meeting and i have n't heard back yet . so . i g hope he got it and hopefully he 's maybe he 's gone , he did n't even reply to my message . so . i should probably ping him just to make that he got it . grad f: alright . so , we have a whole bunch of digits , if we wanna move on to digits . professor d: actually , maybe i one one relate more related thing in transcription . so that 's the ibm . we ' ve got that sorted out . , how 're we doing on the rest of it ? postdoc a: we 're doing . i hire i ' ve hired two extra people already , expect to hire two more . and , i ' ve prepared , , a set of five which i ' m calling set two , which are now being edited by my head transcriber , in terms of spelling errors and all that . she 's also checking through and mar and monitoring , the transcription of another transcriber . , she 's going through and doing these kinds of checks . postdoc a: and , i ' ve moved on now to what i ' m calling set three . if i do it in sets groups of five , then have , like , a parallel processing through the current . and and you indicated to me that we have a g a goal now , for the , { nonvocalsound } the , darpa demo , of twenty hours . so , i ' m gon na go up to twenty hours , be that everything gets processed , and released , and that 's what my goal is . package of twenty hours right now , and then once that 's done , move on to the next . professor d: , so twenty hours . 
but i the other thing is that , that 's kinda twenty hours asap because the longer before the demo we actually have the twenty hours , the more time it 'll be for people to actually do things with it . postdoc a: they would like to do it full - time , several of these people . and and i do n't think it 's possible , really , to do this full - time , but , that what it shows is motivation to do as many hours as possible . professor d: , i the so the difference if , if the ibm works out , the difference in the job would be that they p primarily would be checking through things that were already done by someone else ? grad f: correcting . we 'll we 'll expect that they 'll have to move some time bins and do some corrections . postdoc a: and i , i ' ve also d , discovered so with the new transcriber i ' m so , lemme say that my , so , at present , the people have been doing these transcriptions a channel at a time . and , that , is useful , and t , and then once in a while they 'll have to refer to the other channels to clear something up . , i realize that , w i we 're using the pre - segmented version , and , the pre - segmented version is extremely useful , and would n't it be , useful also to have the visual representation of those segments ? and so i ' ve , i ' ve trained the new one , the new the newest one , to , use the visual from the channel that is gon na be transcribed at any given time . and that 's just amazingly helpful . because what happens then , is you scan across the signal and once in a while you 'll find a blip that did n't show up in the pre - segmentation . postdoc a: once in a while it 's a backchannel . sometimes it seems to be , similar to the ones that are being picked up . and they 're rare events , but you can really go through a meeting very quickly . you just you just , yo you s you scroll from screen to screen , looking for blips . and , that we 're gon na end up with , better coverage of the backchannels , but at the same time we 're benefitting tremendously from the pre - segmentation because there are huge places where there is just no activity . and , the audio quality is so good postdoc a: so that 's gon na , also , speed the efficiency of this part of the process . professor d: , . so , so let 's talk about the digits , since they 're not here yet . grad f: , so , we have a whole bunch of digits that we ' ve read and we have the forms and so on , but only a small number of that ha , not a small number only a subset of that has been transcribed . and so we need to decide what we wanna do . and , liz and andreas actually they 're not here , but , they did say at one point that they thought they could do a pretty good job of just doing a forced alignment . and , again , i do n't think we 'll be able to do with that alone , because , sometimes people correct themselves and things like that . but so , i was just wondering what people thought about how automated can we make the process of finding where the people read the digits , doing a forced alignment , and doing the timing . phd b: you 're talking about as a pre - processing step . right , morgan ? is that what you 're ? professor d: , i ' m not quite what i ' m talking about . i , we 're talking about digits now . and and so , there 's a bunch of that has n't been marked yet . and , there 's the issue that they we was said , but do we ? professor d: because people make mistakes and . 
i was just asking , just out of curiosity , if with , the sri recognizer getting one percent word error , would we do better ? so , if you do a forced alignment but the force but the transcription you have is wrong because they actually made mistakes , or false starts , it 's much less c it 's much less common than one percent ? professor d: right ? so that 's just my question . i ' m not saying it should be one way or the other , but it 's if grad f: but , there 're a couple different of doing it . we could use the tools i ' ve already developed and transcribe it . hire some people , or use the transcribers to do it . we could let ibm transcribe it . , they 're doing it anyway , and unless we tell them different , they 're gon na transcribe it . , or we could try some automated methods . and my tendency right now is , if ibm comes back with this meeting and the transcript is good , just let them do it . professor d: , it 's y you raised a point , , euphemistically but , m maybe it is a serious problem . ho - what will they do when they go hear " beep seven three five two " , you think they 'll we 'll get ? postdoc a: it , it 'd be preceded by " i ' m reading transcript so - and - so " ? so , if they 're processing it at grad f: , it 'll be it will be in the midst of a digit string . so it , there might be a place where it 's " beep seven beep eight beep " . but , they 're gon na macros for inserting the beep marks . and so , i do n't think it 'll be a problem . we 'll have to see , but i do n't think it 's gon na be a problem . professor d: , i , that 's if they are going to transcribe these things , certainly any process that we 'd have to correct them , or whatever is needs to be much less elaborate for digits than for other . so , why not ? that was it ? grad f: that was it . just , what do we do with digits ? we have so many of them , and it 'd be to actually do something with them . professor d: i in berkeley , . so , you you have to go a little early , right ? at twenty professor d: alright . so le let 's make we do the ones that , saved you . professor d: so there was some in in adam 's agenda list , he had something from you about segmentation this last recognition ? phd i: , . so this is just partly to inform everybody , and to get , input . , so , { nonvocalsound } , we had a discussion don and liz and i had discussion last week about how to proceed with , , with don 's work , phd i: and , one of the obvious things that occur to us was that we 're since we now have thilo 's segmenter and it works , amazingly , we should actually re - evaluate the recognition , results using , without cheating on the segmentations . phd e: and how do we find the transcripts for those so that ? the references for those segments ? phd i: no , actually , nist has , m a fairly sophisticated scoring program that you can give a , a time , phd i: , you just give two time - marked sequences of words , and it computes the , the th phd i: it does all the work for you . so , it we just and we use that actually in hub - five to do the scoring . so what we ' ve been using so far was a simplified version of the scoring . and we can handle the type of problem we have here . phd i: it does time - constrained word - alignment . so that should be possible . that should n't be a problem . , so that was the one thing , and the other was that , what was the other problem ? ! that thilo wanted to use the recognizer alignments to train up his , speech detector . 
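For reference, the scoring just described can be sketched as follows. This is a minimal Python version of plain word-level edit distance, not the NIST program itself; the NIST tool (sclite) additionally takes time-marked word sequences, so a hypothesis word may only align to reference words that overlap it in time, which is what makes scoring possible without fixed segment boundaries.

    def word_error_rate(ref, hyp):
        # Word-level Levenshtein distance: substitutions, insertions,
        # and deletions all cost 1; result is errors per reference word.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)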
, so that we could use , there would n't be so much hand labelling needed to , to generate training data for the speech detector . phd e: so , it 's s , eight meetings which i ' m using , and , it 's before it was twenty minutes of one meeting . so should be a little bit better . phd g: actually , i had a question about that . if you find that you can lower the false alarms that you get where there 's no speech , that would be useful for us to know . so , phd g: so , r right now you get f fal , false , speech regions when it 's just like , breath like that , and i 'd be interested to know the wha if you retrain , do those actually go down or not ? because of phd e: i 'll can make an can , like , make a c comparison of the old system to the new one , and then phd g: , just to see if by doing nothing in the modeling of just having that training data wh what happens . professor d: another one that we had on adam 's agenda that definitely involved you was s something about smartkom ? grad f: so , rob porzel , porzel ? and the , porzel and the , smartkom group are collecting some dialogues . grad f: they have one person sitting in here , looking at a picture , and a wizard sitting in another room somewhere . and , they 're doing a travel task . and , it involves starting i believe starting with a it 's it 's always the wizard , but it starts where the wizard is pretending to be a computer and it goes through a , speech generation system . grad f: synthesis system . , and then , it goes to a real wizard and they 're evaluating that . and they wanted to use this equipment , and so the w question came up , is , here 's some more data . should this be part of the corpus or not ? and my attitude was yes , because there might be people who are using this corpus for acoustics , as opposed to just for language . , or also for dialogue of various sorts . , so it 's not a meeting . right ? because it 's two people and they 're not face to face . professor d: a minute . so , wanted to understand it , cuz i ' m , had n't quite followed this process . so , it 's wizard in the sen usual sense that the person who is asking the questions does n't know that it 's , a machi not a machine ? phd i: actually actually , w the we do this who came up with it , but it 's a really clever idea . we simulate a computer breakdown halfway through the session , and so then after that , the person 's told that they 're now talking to a , to a human . phd i: so , we collect both human - computer and human - human data , essentially , in the same session . professor d: you might wanna try collecting it the other way around sometime , saying that th the computer is n't up yet and then so then you can separate it out whether it 's the beginning or end effects . grad f: i have to go now . you can talk to the computer . phd b: if you tell them that they 're the computer part is running on a windows machine . and the whole breakdown thing kinda makes sense . grad f: and i said , " that 's silly , if we 're gon na try to do it for a corpus , there might be people who are interested in acoustics . " phd i: no , the question is do we save one or two far - field channels or all of them ? grad f: i see no reason not to do all of them . that that if we have someone who is doing acoustic studies , it 's to have the same for every recording . professor d: so , what is the purpose of this recording ? this is to get acoustic and language model training data for smartkom ok .
phd i: it 's to be traini to b training data and development data for the smartkom system . phd e: but but i ' m not about the legal aspect of that . is is there some contract with smartkom about the data ? what they or , is that our data which we are collecting here , professor d: we ' ve never signed anything that said that we could n't use anything that we did . professor d: no that 's not a problem . i l look , it seems to me that if we 're doing it anyway and we 're doing it for these purposes that we have , and we have these distant mikes , we definitely should re should save it all as long as we ' ve got disk space , and disk is pretty cheap . professor d: now th so we save it because it 's potentially useful . and now , what do we do with it is a s separate question . , anybody who 's training something up could choose to put it , to u include this or not . i would not say it was part of the meetings corpus . it is n't . but it 's some other data we have , and if somebody doing experiment wants to train up including that then they can . grad f: so it 's it it i it the begs the question of what is the meeting corpus . so if , at uw they start recording two - person hallway conversations is that part of the meeting corpus ? professor d: it 's i th think the idea of two or more people conversing with one another is key . phd g: what if we just give it a name like we give these meetings a name ? phd g: and then later on some people will consider it a meeting and some people wo n't , grad f: that was my intention . so so s so part of the reason that i wanted to bring this up is , do we wanna handle it as a special case or do we wanna fold it in , grad f: we give everyone who 's involved as their own user id , give it session i ds , let all the tools that handle meeting recorder handle it , or do we wanna special case it ? and if we were gon na special case it , who 's gon na do that ? phd i: , it makes sense to handle it with the same infrastructure , since we do n't want to duplicate things unnecessarily . phd i: but as far as distributing it , we should n't label it as part of this meeting corpus . we should let it be its own corp postdoc a: i ha i have an extra point , which is the naturalness issue . because we have , like , meetings that have a reason . that 's one of the reasons that we were talking about this . and and those and this sounds like it 's more of an experimental setup . it 's got a different purpose . professor d: it 's scenario - based , it 's human - computer interface it 's really pretty different . but i have no problem with somebody folding it in for some experiment they 're gon na do , but i do n't it does n't match anything that we ' ve described about meetings . whereas everything that we talked about them doing at uw and really does . they 're actually talking grad f: so w so what does that mean for how we are gon na organize things ? professor d: you can you can again , as andreas was saying , if you wanna use the same tools and the same conventions , there 's no problem with that . it 's just that it 's , different directory , it 's called something different , it 's it is different . you ca n't just fold it in as if it 's , digits are different , too . grad f: and it 's just you just mark the transcripts differently . so so one option is you fold it in , and just simply in the file you mark somewhere that this is this type of interaction , rather than another type of interaction . professor d: , i don i would n't call reading digits " meetings " . 
, we were doing grad f: , but , i put it under the same directory tree . , it 's in " user doctor speech data mr " . grad f: my preference is to have a single procedure so that i do n't have to think too much about things . professor d: o - you you can use whatever procedure you want that 's p convenient for you . grad f: if we do it any other way that means that we need a separate procedure , and someone has to do that . professor d: all i ' m saying is that there 's no way that we 're gon na tell people that reading digits is meetings . and similarly we 're not gon na tell them that someone talking to a computer to get travel information is meetings . those are n't meetings . but if it makes it easier for you to pu fold them in the same procedures and have them under the same directory tree , knock yourself out . phd b: there 's a couple other questions that i have too , and one of them is , what about , consent issues ? and the other one is , what about transcription ? are ? phd i: that 's a that 's another argument to keep it separate , because it 's gon na follow the smartkom transcription conventions and not the icsi meeting transcription conventions . professor d: but i ' m no one would have a problem with our folding it in for some acoustic modeling or some things . do we h do we have , , american - born folk , reading german , pla , place names and ? is that ? grad f: disk might eventually be an issue so we might need to , get some more disk pretty soon . grad f: we 're probably a little more than that because we 're using up some space that we should n't be on . so , once everything gets converted over to the disks we 're supposed to be using we 'll be probably , seventy - five percent . phd b: , when i was looking for space for thilo , i found one disk that had , it was nine gigs and another one had seventeen . and everything else was sorta committed . grad f: i ' m much more concerned about the backed - up . the non - backed - up , grad f: i is cheap . , if we need to we can buy a disk , hang it off a s , workstation . if it 's not backed - up the sysadmins do n't care too much . professor d: so , anytime we need a disk , we can get it at the rate that we 're phd i: you can i should n't be saying this , but , you can just , since the back - ups are every night , you can recycle the backed - up diskspace . professor d: da - we had allowed dave to listen to these , recordings . i me and there 's been this conversation going on about getting another file server , and we can do that . we 'll take the opportunity and get another big raft of disk , i . phd i: , i think there 's an argument for having , you could use our old file server for disks that have data that is very rarely accessed , and then have a fast new file server for data that is , heavily accessed . grad f: my understanding is , the issue is n't really the file server . we could always put more disks on . phd b: i think the file server could become an issue as we get a whole bunch more new compute machines . phd b: and we ' ve got , fifty machines trying to access data off of abbott at once . phd i: , i think we ' ve raised this before and someone said this is not a reliable way to do it , but the what about putting the on , like , c - cd - rom or dvd ? grad f: that was me . i was the one who said it was not reliable . the - they wear out . the the th phd i: but they wear out just from sitting on the shelf ? or from being read and read ? grad f: no . read and write do n't hurt them too much unless you scratch them . 
but the r the write once , and the read - writes , do n't last . so you do n't wa you do n't wanna put ir un reproduceable data on them . phd i: but if that then you would think you 'd hear much more clamoring about data loss and grad f: they 're on cd , but they 're not tha that 's not the only source . grad f: they have them on disk . and they burn new ones every once in a while . but if you go k grad f: but , the burned ones , when i say two or three years what i ' m saying is that i have had disks which are gone in a year . grad f: on the average , it 'll probably be three or four years . but , i you do n't want to per p have your only copy on a media that fails . grad f: and they do . , if you have them professionally pressed , y , they 're good for decades . phd i: so how about ? so so how about putting them on that plus , like on a on dat or some other medium that is n't risky ? grad f: th we can already put them on tape . and the tape is hi is very reliable . so the only issue is then if we need access to them . so that 's fine f if we do n't need access to them . phd i: , if you if they last say , they actually last , like , five years , in the typical case , and occasionally you might need to recreate one , and then you get your tape out , but otherwise you do n't . ca n't you just put them on ? grad h: so you just archive it on the tape , and then put it on cd as ? grad f: , you can do that but that 's pretty annoying , because the c ds are so slow . phd b: what 'd be is a system that re - burned the c ds every year . grad f: the the cd is an alternative to tape . icsi already has a perfectly good tape system and it 's more reliable . phd i: one one thing i do n't understand is , if you have the data if you if the meeting data is put on disk exactly once , then it 's backed - up once and the back - up system should never have to bother with it , more than once . grad f: , regardless , first of all there was , a problem with the archive in that i was every once in a while doing a chmod on all the directories an or recursive chmod and chown , because they were n't getting set correctly every once in a while , and i was just , doing a minus r star , not realizing that caused it to be re - backed - up . but normally you 're correct . but even without that , the back - up system is becoming saturated . phd i: but but this back - up system is smart enough to figure out that something has n't changed and does n't need to be backed - up again . professor d: the b th the at least the once tha that you put it on , it would kill that . grad f: , but we still have enough changed that the nightly back - ups are starting to take too long . grad f: it has nothing to do with the meeting . it 's just the general icsi back - up system is becoming saturated . phd i: so , what if we buy , what do they call these , high density ? grad f: , why do n't you have this have a this conversation with dave johnson tha rather than with me ? phd i: no , no . because this is maybe something that we can do without involving dave , and , putting more burden on him . how about we buy , uh , one of these high density tape drives ? and we put the data actually on non - backed - up disks . and we do our own back - up once and for all , and then and we do n't have to bother this @ up ? professor d: no , we have s we do n't we have our own ? something wi th that does n't that is n't used by the back - up gang ? do n't we have something downstairs ? grad f: but no , but andreas 's point is a good one . 
and we do n't have to do anything ourselves to do that . they 're already right now on tape . so your point is , and it 's a good one , that we could just get more disk and put it there . professor d: , that 's what i was gon na say , is that a disk is so cheap it 's es essentially , close to free . and the only thing that costs is the back - up issue , to first order . professor d: and we can take care of that by putting it on non - back up drives and just backing it up once onto this tape . phd g: so , who 's gon na do these back - ups ? the people that collect it ? grad f: , i 'll talk to dave , and see what th how { nonvocalsound } what the best way of doing that is . grad f: there 's a little utility that will manually burn a tape for you , and that 's probably the right way to do it . phd b: , and we should probably make that part of the procedure for recording the meetings . grad f: we 're g we 're gon na automate that . my intention is to do a script that 'll do everything . phd i: , but then you 're effectively using the resources of the back - up system . or is that a different tape robot ? phd i: , i ' m saying is @ i if you go to dave , and ask him " can i use your tape robot ? " , he will say , " that 's gon na screw up our back - up operation . " grad f: no , we wo n't . he 'll say " if that means that it 's not gon na be backed - up standardly , great . " professor d: he - i dave has promoted this in the past . so i do n't think he 's actually against it . grad f: , it 's just a utility which queues up . it just queues it up and when it 's available , it will copy it . and then you can tell it to then remove it from the disk or you can , do it a few days later or whatever you wanna do , after you confirm that it 's really backed - up . nw ? postdoc a: and if you did that during the day it would never make it to the nightly back - ups . phd i: , it if he you have to put the data on a non - backed - up disk to begin with . postdoc a: , but you can have it nw archive to you can have , a non - backed - up disk nw archived , grad f: and then it never which i ' m would make ever the sysadmins very happy . so , that 's a good idea . that 's what we should do . so , that means we 'll probably wanna convert all those files filesystems to non - backed - up media . professor d: , another , thing on the agenda said sri recognition experiments ? what 's that ? phd b: so , i ' m just playing with , the number of gaussians that we use in the recognizer , and phd i: , you have to sa you have to tell people that you 're doing you 're trying the tandem features . professor d: , i got confused by the results . it sai because , the meeting before , you said " ok , we got it down to where they 're within a tenth of a percent " . phd i: that was that was before i tried it on the females . see , women are nothi are , trouble . phd i: we had reached the point where , on the male portion of the development set , the , or one of the development sets , i should say the , the male error rate with , icsi plp features was identical with , sri features . which are mfcc . so , then , " , great . i 'll j i 'll just let 's make everything works on the females . " and the error rate , there was a three percent difference . so , phd i: , and the test data is callhome and switchboard . so , so then , and plus the vocal tract length normalization did n't actually made things worse . so something 's really wrong . so professor d: aha ! 
so but you see , now , between the males and the females , there 's certainly a much bigger difference in the scaling range , than there is , say , just within the males . and what you were using before was scaling factors that were just from the m the sri front - end . and that worked fine . professor d: , but now you 're looking over a larger range and it may not be so fine . phd i: d so the one thing that i then tried was to put in the low - pass filter , which we have in the so , most hub - five systems actually band - limit the , at about , thirty - seven hundred , hertz . although , normally , the channel goes to four thousand . and that actually helped , a little bit . and it did n't hurt on the males either . and i ' m now , trying the , and suddenly , also the v the vocal tract length normalization only in the test se on the test data . so , you can do vocal tract length normalization on the test data only or on both the training and the test . and you expect it to help a little bit if you do it only on the test , and s more if you do it on both training and test . and so the it now helps , if you do it only on the test , and i ' m currently retraining another set of models where it 's both in the training and the test , and then we 'll have , hopefully , even better results . but there 's it looks like there will still be some difference , maybe between one and two percent , for the females . and so , , i ' m open to suggestions . and it is true that the , we are using the but it ca n't be just the vtl , because if you do n't do vtl in both systems , , the females are considerably worse in the with the plp features . phd g: , what 's the standard ? , so the performance was actually a little better on females than males . phd i: , that ye overall , yes , but on this particular development test set , they 're actually a little worse . but that 's beside the point . we 're looking at the discrepancy between the sri system and the sri system when trained with icsi features . phd g: i ' m just wondering if that if you have any indication of your standard features , professor d: , is this , iterative , baum - welch training ? or is it viterbi training ? or ? phd i: , actually , we just do a s a fixed number of iterations . , in this case four . , which , we used to do only three , and then we found out we can squeeze and it was , we 're s we 're keeping it on the safe side . but you 're d it might be that one more iteration would help , but it 's phd i: no , but with baum - welch , there should n't be an over - fitting issue , really . professor d: it d if you remember some years ago bill byrne did a thing where he was looking at that , phd i: we can , that 's the easy one to check , because we save all the intermediate models professor d: , i ' m in each case how do you determine , the usual fudge factors ? the , the , language , scaling , acoustic scaling , phd i: i ' m actually re - optimizing them . although that has n't shown to make a big difference . professor d: and the pru the question he was asking at one point about pruning , remember that one ? professor d: , he was he 's it looked like the probabil at one point he was looking at the probabilities he was getting out at the likelihoods he was getting out of plp versus mel cepstrum , and they looked pretty different , phd i: . , the likelihoods are you ca n't directly compare them , because , for every set of models you compute a new normalization . 
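A note on the vocal tract length normalization being tried here: in its common piecewise-linear form it is simply a per-speaker warping of the frequency axis. The sketch below is a generic version with an arbitrarily chosen breakpoint; the warping function actually used in the SRI front-end may differ.

    import numpy as np

    def vtln_warp(freqs, alpha, f_nyquist=4000.0, f_break_frac=0.875):
        # Scale frequencies below the breakpoint by alpha; above it, use a
        # second linear segment so the warped axis still ends at Nyquist.
        f_break = f_break_frac * f_nyquist
        freqs = np.asarray(freqs, dtype=float)
        slope = (f_nyquist - alpha * f_break) / (f_nyquist - f_break)
        return np.where(freqs <= f_break,
                        alpha * freqs,
                        alpha * f_break + slope * (freqs - f_break))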
and so these log probabilities , they are n't directly comparable because you have a different normalization constants for each model you train . professor d: but , still it 's a question if you have some threshold somewhere in terms of beam search , phd b: , if you have one threshold that works because the range of your likelihoods is in this area phd i: we prune very conservatively . , as we saw with the meeting data , we could probably tighten the pruning without really so we have a very open beam . professor d: but , you 're only talking about a percent or two . here we 're - we 're saying that we there gee , there 's this b , there 's this difference here . and it see cuz , i there could be lots of things . but but , let 's suppose just for a second that , we ' ve taken out a lot of the major differences , between the two . professor d: , we 're already using the mel scale and we 're using the same style filter integration , and , we 're making that low and high phd i: actually , there is the difference in that . so , for the plp features we use the triangular filter shapes . and for the in the sri front - end we use the trapezoidal one . phd i: , now it 's the same . it 's thirty to seven hundred and sixty hertz . professor d: before we i th with straight plp , it 's trapezoidal also . but then we had a slight difference in the scale . , so . phd i: since currently the feacalc program does n't allow me to change the filter shape independently of the scale . and , i did the experiment on the sri front - end where i tried the y where the standard used to be to use trapezoidal filters . you can actually continuously vary it between the two . and so i wen i swi i tried the trap , triangular ones . and it did slightly worse , but it 's really a small difference . phd i: , exactly . so , it 's not i do n't think the filter shape by itself will make a huge difference . professor d: so the oth the other thing that so , f i we ' ve always viewed it , anyway , as the major difference between the two , is actually in the smoothing , that the , plp , and the reason plp has been advantageous in , slightly noisy situations is because , plp does the smoothing at the end by an auto - regressive model , and mel cepstrum does it by just computing the lower cepstral coefficients . so , - . phd i: so one thing i have n't done yet is to actually do all of this with a much larger with our full training set . so right now , we 're using a i , forty ? i i it 's a f training set that 's about , , by a factor of four smaller than what we use when we train the full system . so , some of these smoothing issues are over - fitting for that matter . and the baum - welch should be much less of a factor , if you go full whole hog . phd i: and so , w so , just so the strategy is to first treat things with fast turn - around on a smaller training set and then , when you ' ve , narrowed it down , you try it on a larger training set . and so , we have n't done that yet . professor d: now the other que related question , though , is , what 's the boot models for these things ? phd i: th - th the boot models are trained from scratch . so we compute , so , we start with a , alil alignment that we computed with the b the best system we have . and and then we train from scratch . so we com we do a , w we collect the , the observations from those alignments under each of the feature sets that we train . 
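To make the filter-shape point concrete: a standard triangular mel filterbank, band-limited to roughly 30-3700 Hz as discussed above, can be built as below; a trapezoidal bank would flatten the top of each filter rather than peaking at a single FFT bin. This is a generic sketch, not the feacalc or SRI code.

    import numpy as np

    def mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_inv(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def tri_filterbank(n_filt, n_fft, sr, f_lo=30.0, f_hi=3700.0):
        # Filter centers equally spaced on the mel scale between f_lo and f_hi.
        pts = mel_inv(np.linspace(mel(f_lo), mel(f_hi), n_filt + 2))
        bins = np.floor((n_fft + 1) * pts / sr).astype(int)
        fb = np.zeros((n_filt, n_fft // 2 + 1))
        for i in range(n_filt):
            l, c, r = bins[i], bins[i + 1], bins[i + 2]
            for b in range(l, c):                 # rising edge
                fb[i, b] = (b - l) / max(c - l, 1)
            for b in range(c, r):                 # falling edge
                fb[i, b] = (r - b) / max(r - c, 1)
        return fb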
and then , from there we do , there 's a lot of , actually the way it works , you first train a phonetically - tied mixture model . you do a total of first you do a context - independent ptm model . then you switch to a context you do two iterations of that . then you do two iterations of context - dependent phonetically - tied mixtures . and then from that you do the you go to a state - clustered model , and you do four iterations of that . so there 's a lot of iterations overall between your original boot models and the final models . i do n't think that we have never seen big differences . once " , now i have these much better models . i 'll re - generate my initial alignments . then i 'll get much better models at the end . " made no difference whatsoever . it 's it 's , i professor d: but , this for making things worse . this it migh th - the thought is possible another possible partial is if the boot models used a comple used a different feature set , that phd i: but there are no boot models , . you you 're not booting from initial models . you 're booting from initial alignments . professor d: , they will find boundaries a little differently , though , all th all that thing is actually slightly different . i 'd expect it to be a minor effect , phd i: so , we e w f w for a long time we had used boot alignments that had been trained with a with the same front - end but with acoustic models that were , like , fifteen percent worse than what we use now . and with a dict different dictionary with a considerably different dictionary , which was much less detailed and much less - suited . and so , then we switched to new boot alignments , which now had the benefit of all these improvements that we ' ve made over two years in the system . and , the result in the end was no different . so , what i ' m saying is , the exact nature of these boot alignments is probably not a big factor in the quality of the final models . professor d: , maybe not . but it i st still see it as , there 's a history to this , too , but i , i do n't wanna go into , but i th it could be the things that it the data is being viewed in a certain way , that a beginning is here rather than there and , because the actual signal - processing you 're doing is slightly different . but , it 's that 's probably not it . phd i: anyway , i should really reserve , any conclusions until we ' ve done it on the large training set , and until we ' ve seen the results with the vtl in training . professor d: at some point you also might wanna take the same thing and try it on , some broadcast news data else that actually has some noisy components , so we can see if any conclusions we come to holds across different data . grad f: . just what we were talking about before , which is that i ported a blass library to absinthe , and then got it working with fast - forward , and got a speedup roughly proportional to the number of processors times the clock cycle . so , that 's pretty good . , i ' m in the process of doing it for quicknet , but there 's something going wrong and it 's about half the speed that i was estimating it should be , and i ' m not why . but i 'll keep working on it . but the what it means is that it 's likely that for net training and forward passes , we 'll absinthe will be a good machine . especially if we get a few more processors and upgrade the processors . professor d: probably just throw away the old ones , and for the box , and i 'll just go buy their process . 
grad f: we 'd have to get a almost certainly have to get a , netfinity server . professor d: is is liz coming back , do , you do n't . see you . so , they 're having tea out there . so i the other thing that we were gon na talk about is , demo . and , so , these are the demos for the , july , meeting and , darpa mee professor d: , sixteenth , eighteenth . so , we talked about getting something together for that , but maybe , maybe we 'll just put that off for now , given that but maybe we should have a sub - meeting , , probably , adam and , chuck and me should talk about should get together and talk about that sometime soon . professor d: something like that . , , maybe we 'll involve dan ellis at some level as . ok . the the tea is going , so , i suggest we do , a unison . grad f: which is gon na be a little hard for a couple people because we have different digits forms . we have a i found a couple of old ones . grad f: so , the idea is just to read each line with a short pause between lines , grad f: not between and , since we 're in a hurry , we were just gon na read everyone all at once . so , if you sorta plug your ears and read grad f: so first read the transcript number , and then start reading the digits . ok ? one , two , three .

summary: this meeting mainly outlines the progress of the meeting recorder project. in particular , the group discuss their preparation of materials for the transcription of digits by ibm , and also the human transcribers who are working towards preparing twenty hours of meetings for the darpa demo. other discussion focuses on the re-evaluation of recognition without cheating on segmentation , and on how sri recognition can be improved , especially for female speakers. a number of issues regarding the management of data are addressed by the group: the inclusion of different data types in the corpus , and the storage and back-up of the group's data. progress has been made on naming conventions , with file reorganisation to be done at a later date ; however , this was not discussed fully due to chuck's absence. finally , absinthe is now up and running with improved performance. discussion of demos for the july darpa meeting was left to the individuals concerned. a number of data issues were resolved: after discussing the human-computer interaction smartkom data , the group decide that different types of data can be included in the meeting corpus , but that they should be structured into different directories according to data type. some of the data storage problems can be overcome by backing up with the nw archive. however , file reorganisation will be left until just before a zero-level back-up. the group decide to use ibm transcription of the digits , in addition to automatic methods ; if the ibm approach works , transcribers will check and correct the output. discussion of demos for the july meeting has been postponed: the individuals concerned will meet independently. disk space is an issue for the group , especially in terms of back-up , which is 75% full: dvds and cds are unreliable and cannot be used , and although tape is reliable , it creates access issues. experiments with sri recognition have shown greater error when using tandem features , with vocal tract length normalisation also making results worse; in particular , more error is found for the female speakers , who are 1-2% worse than the males. digits and beeps have been re-recorded by chuck to aid ibm transcription and enable alignment .
the group discuss the extent to which digits can be automatically transcribed , including their experimentation with forced alignment and speech recognition. transcription is progressing well: two transcribers have been hired , and two more will be hired soon. five "set 2" meetings are being edited by the head transcriber , and "set 3" meetings are being prepared , with the aim of having twenty hours available for the darpa demo. pre-segmentation is very useful , with visual information desirable for transcription of backchannel behaviour. now that thilo's segmenter is working , the group discuss re-evaluation of recognition without cheating on the segmentation , possibly using time-constrained alignment. they also discuss the use of recogniser alignments to train the speech detector. the group discuss possible ways to improve the sri recognition error rate ; suggestions include the use of a low-pass filter and retraining the models. differences in smoothing are proposed to be mainly responsible for the difference between the two front-ends. file reorganisation was discussed only briefly as chuck was not present ; however , progress has been made in sharing file naming conventions with uw. also , the absinthe machine is now working well , and has sped up roughly in proportion to its number of processors.
professor d: we 're on ? yes , . , we 're testing noise robustness but let 's not get silly . ok , so , you ' ve got some , xerox things to pass out ? phd a: , i ' m for the table , but as it grows in size , it . professor d: when you get older you have these different perspectives . , lowering the word error rate is fine , but having big font ! phd a: and also since the we ' ve started to run work on this . so since last week we ' ve started to fill the column with features w with nets trained on plp with on - line normalization but with delta also , because the column was not completely phd a: , it 's still not completely filled , but we have more results to compare with network using without plp and finally , hhh , ehhh the delta seems very important . i . if you take , let 's say , anyway aurora - two - b , so , the next t the second , part of the table , phd a: when we use the large training set using french , spanish , and english , you have one hundred and six without delta and eighty - nine with the delta . professor a: a all of these numbers are with a hundred percent being , the baseline performance , phd a: . so now we see that the gap between the different training set is much smaller phd a: but , actually , for english training on timit is still better than the other languages . and mmm , . and f also for italian , actually . if you take the second set of experiment for italian , so , the mismatched condition , when we use the training on timit so , it 's multi - english , we have a ninety - one number , and training with other languages is a little bit worse . phd a: and , and here the gap is still more important between using delta and not using delta . if y if i take the training s the large training set , it 's we have one hundred and seventy - two , and one hundred and four when we use delta . even if the contexts used is quite the same , because without delta we use seventeenths seventeen frames . , so the second point is that we now have single cross - language experiments , which we did not have last week . , so this is training the net on french only , or on english only , and testing on italian . and training the net on french only and spanish only and testing on , ti - digits . and , fff , what we see is that these nets are not as good , except for the multi - english , which is always one of the best . then we started to work on a large dat database containing , sentences from the french , from the spanish , from the timit , from spine , from english digits , and from italian digits . so this is the another line another set of lines in the table . , @ with spine and , actually we did this before knowing the result of all the data , so we have to redo the experiment training the net with , plp , but with delta . but this net performed quite . it 's better than the net using french , spanish , and english only . we have also started feature combination experiments . many experiments using features and net outputs together . and this is the results are on the other document . , we can discuss this after , perhaps , just , @ . , so there are four systems . the first one , is combining , two feature streams , using and each feature stream has its own mlp . so it 's the similar to the tandem that was proposed for the first . the multi - stream tandem for the first proposal . the second is using features and klt transformed mlp outputs . and the third one is to u use a single klt trans transform features as mlp outputs . mmm .
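Since the delta features turn out to matter so much in these tables, the standard regression formula over a window of plus or minus K frames is sketched below; the window size here is illustrative, not necessarily what was used in these experiments.

    import numpy as np

    def deltas(feats, win=2):
        # Regression-based deltas: weighted slope over +/- win frames,
        # with edge frames replicated at the utterance boundaries.
        T = feats.shape[0]
        padded = np.pad(feats, ((win, win), (0, 0)), mode="edge")
        denom = 2 * sum(k * k for k in range(1, win + 1))
        out = np.zeros_like(feats, dtype=float)
        for k in range(1, win + 1):
            out += k * (padded[win + k:win + k + T] - padded[win - k:win - k + T])
        return out / denom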
you can comment these results , phd b: yes , s i would like to say that , , mmm , if we does n't use the delta - delta , we have an improve when we use s some combination . but when phd b: and first in the experiment - one i do i use different mlp , and is that the multi - english mlp is the better . for the ne rest of experiment i use multi - english , only multi - english . and i try to combine different type of feature , but the result is that the msg - three feature does n't work for the italian database because never help to increase the accuracy . phd a: , actually , if w we look at the table , the huge table , we see that for ti - digits msg perform as the plp , but this is not the case for italian what where the error rate is c is almost twice the error rate of plp . so , , i do n't think this is a bug but this is something in probably in the msg process that i what exactly . perhaps the fact that the there 's no low - pass filter , or no pre - emp pre - emphasis filter and that there is some dc offset in the italian , or , something simple like that . but that we need to sort out if want to get improvement by combining plp and msg because for the moment msg do does n't bring much information . and as carmen said , if we combine the two , we have the result , of plp . professor d: i , the , baseline system when you said the baseline system was , eighty - two percent , that was trained on what and tested on what ? that was , italian mismatched d , digits , is the testing , and the training is italian digits ? so the " mismatch " just refers to the noise and , microphone and , right ? so , did we have so would that then correspond to the first line here of where the training is the italian digits ? professor d: training of the net , so what that says is that in a matched condition , we end up with a fair amount worse putting in the plp . now w would do we have a number , i suppose for the matched i do n't mean matched , but use of italian training in italian digits for plp only ? phd a: which is what we have also if use plp and msg together , eighty - nine point seven . professor d: ok , so even just plp , it is not , in the matched condition i wonder if it 's a difference between plp and mel cepstra , or whether it 's that the net half , for some reason , is not helping . phd a: , we have these results . it 's not do you have this result with plp alone , j fee feeding htk ? professor d: eighty - eight point six . , so adding msg , but that 's , that 's without the neural net , phd a: , that 's without the neural net and that 's the result that ogi has also with the mfcc with on - line normalization . phd a: eighty - two is the it 's the aurora baseline , so mfcc . then we can use , ogi , they use mfcc th the baseline mfcc plus on - line normalization professor d: , i ' m , i k i keep getting confused because this is accuracy . ok . alright . so this is i was thinking all this was worse . ok so this is all better professor d: because eighty - nine is bigger than eighty - two . i ' m all better now . ok , go ahead . phd a: so what happ what happens is that when we apply on - line normalization we jump to almost ninety percent . , when we apply a neural network , is the same . we j jump to ninety percent . phd a: whatever the normalization , actually . if we use n neural network , even if the features are not correctly normalized , we jump to ninety percent . professor d: so we go from eighty - si eighty - eight point six to ninety , . 
phd a: , ninety no , ninety it 's around eighty - nine , ninety , eighty - eight . professor d: for this case , right ? alright . so , so actually , the answer for experiments with one is that adding msg , if you does not help in that case . professor d: the other ones , we 'd have to look at it , but and the multi - english , does so if we think of this in error rates , we start off with , eighteen percent error rate , roughly . and we almost , cut that in half by putting in the on - line normalization and the neural net . and the msg does n't however particularly affect things . and we cut off , i about twenty - five percent of the error . no , not quite that , is it . , two point six out of eighteen . about , sixteen percent of the error , if we use multi - english instead of the matching condition . not matching condition , but , the , italian training . professor d: yes , good . ok ? so then you 're assuming multi - english is closer to the thing that you could use since you 're not gon na have matching , data for the new for the other languages and . , one qu thing is that , i asked you this before , but i wanna double check . when you say " me " in these other tests , that 's the multi - english , professor d: but it is not all of the multi - english , it is some piece of part of it . professor d: , so you used almost all you used two thirds of it , you think . so , it 's still it hurts you seems to hurt you a fair amount to add in this french and spanish . i wonder why . phd b: mmm , with the experiment type - two , i first i tried to combine , nnn , some feature from the mlp and other feature another feature . and we s we can first the feature are without delta and delta - delta , and we can see that in the situation , the msg - three , the same help nothing . and then i do the same but with the delta and delta - delta plp delta and delta - delta . and they all p but they all put off the mlp is it without delta and delta - delta . and we have a l little bit less result than the baseline plp with delta and delta - delta . maybe if when we have the new neural network trained with plp delta and delta - delta , maybe the final result must be better . i . phd a: actually , just to be some more do this number , this eighty - seven point one number , has to be compared with the professor d: so you have to compare it with the one over that you ' ve got in a box , which is that , the eighty - four point six . so phd a: , but in this case for the eighty - seven point one we used mlp outputs for the plp net and straight features with delta - delta . and straight features with delta - delta gives you what 's on the first sheet . phd a: so we use feature out , net outputs together with features . this is not perhaps not clear here but in this table , the first column is for mlp and the second for the features . professor d: . , i see . so you 're saying w so asking the question , " what what has adding the mlp done to improve over the , phd a: so , just so , actually it decreased the accuracy . because we have eighty - eight point six . and even the mlp alone what gives the mlp alone ? multi - english plp . no , it gives eighty - three point six . so we have our eighty - three point six and now eighty - eighty point six , professor d: eighty - s it was eighty , ok , eighty - three point six and eighty - eight point six . phd b: but i but maybe if we have the neural network trained with the plp delta and delta - delta , maybe tha this can help . 
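The arithmetic being worked through aloud here, converting accuracies to error rates and comparing relative reductions, is simply:

    def relative_error_reduction(acc_base, acc_new):
        # Accuracies in percent; returns the relative cut in error rate.
        err_base, err_new = 100.0 - acc_base, 100.0 - acc_new
        return 100.0 * (err_base - err_new) / err_base

    # e.g. going from 82% to 89.7% accuracy is 18% -> 10.3% error, about
    # a 43% relative reduction; and 2.6 points out of 18 is about 14%,
    # close to the rough estimate given above.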
professor d: , that 's one thing , but see the other thing is that , it 's good to take the difficult case , but let 's consider what that means . what what we 're saying is that one o one of the things that my interpretation of your s original suggestion is something like this , as motivation . when we train on data that is in one sense or another , similar to the testing data , then we get a win by having discriminant training . when we train on something that 's quite different , we have a potential to have some problems . and , if we get something that helps us when it 's somewhat similar , and does n't hurt us too much when it 's quite different , that 's maybe not so bad . so the question is , if you took the same combination , and you tried it out on , on say digits , professor d: , ok . , then does that , maybe with similar noise conditions and , does it then look much better ? and so what is the range over these different kinds of tests ? so , an anyway . phd b: and , with this type of configuration which i do on experiment using the new neural net with name broad klatt s twenty - seven , d i have found more or less the same result . professor d: and maybe if you use the , delta there , you would bring it up to where it was , at least about the same for a difficult case . phd a: it 's either less information from the neural network if we use only the silence output . phd b: because in this situation we have one hundred and three feature . and then w with the first configuration , i f i am found that work , does n't work , but is better , the second configuration . because i for the del engli - plp delta and delta - delta , here i have eighty - five point three accuracy , and with the second configuration i have eighty - seven point one . professor d: , there is a another , suggestion that would apply , to the second configuration , which , was made , by , hari . and that was that , if you have feed two streams into htk , and you , change the , variances if you scale the variances associated with , these streams , you can effectively scale the streams . so , , without changing the scripts for htk , which is the rule here , you can still change the variances which would effectively change the scale of these , two streams that come in . and , so , if you do that , it may be the case that , the mlp should not be considered as strongly , . and , so this is just setting them to be , excuse me , of equal weight . maybe it should n't be equal weight . professor d: right ? , i ' m to say that gives more experiments if we wanted to look at that , but , , on the other hand it 's just experiments at the level of the htk recognition . it 's not even the htk , professor d: , do you ? let me think . maybe you do n't . , you have to change the no , you can just do it in as once you ' ve done the training professor d: , the training is just coming up with the variances so i you could just scale them all . phd a: is it i th the htk models are diagonal covariances , so i d is it professor d: that 's , exactly the point , that if you change , change what they are it 's diagonal covariance matrices , but you say what those variances are . so , that , it 's diagonal , but the diagonal means th that then you 're gon na it 's gon na internally multiply it and , i it i m implicitly exponentiated to get probabilities , and so it 's gon na it 's going to affect the range of things if you change the variances of some of the features . 
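Hari's suggestion, concretely: for a diagonal-covariance Gaussian the log-likelihood is a per-dimension sum, so multiplying one stream's variances by a factor k greater than one shrinks that stream's data-dependent term by 1/k, effectively down-weighting it without touching the HTK scripts. A minimal sketch:

    import numpy as np

    def diag_gauss_loglik(x, mean, var):
        # Log-likelihood under a diagonal-covariance Gaussian.
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

    # Down-weight stream 2 relative to stream 1 by inflating its variances:
    #   total = diag_gauss_loglik(x1, m1, v1) + diag_gauss_loglik(x2, m2, k * v2)
    # With k > 1, the (x - m)^2 / (k * v) term flattens stream 2's
    # contribution to the combined score.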
professor d: so , i it 's precisely given that model you can very simply affect , the s the strength that you apply the features . that was that was , hari 's suggestion . so , so it could just be that h treating them equally , tea treating two streams equally is just not the right thing to do . it 's potentially opening a can of worms because , maybe it should be a different number for each test set , so i the other thing is to take if one were to take , , a couple of the most successful of these , phd a: so , the next point , we ' ve had some discussion with steve and shawn , about their , articulatory , so we 'll perhaps start something next week . , discussion with hynek , sunil and pratibha for trying to plug in their our networks with their within their block diagram , where to plug in the network , after the feature , before as a plugin or as a anoth another path , discussion about multi - band and traps , actually hynek would like to see , perhaps if you remember the block diagram there is , temporal lda followed b by a spectral lda for each critical band . and he would like to replace these by a network which would , make the system look like a trap . , it would be a trap system . , this is a trap system , but where the neural network are replaced by lda . , and about multi - band , i started multi - band mlp trainings , mmh actually , i w hhh prefer to do exactly what i did when i was in belgium . so i take exactly the same configurations , seven bands with nine frames of context , and we just train on timit , and on the large database , so , with spine and everything . mmm , i ' m starting to train also , networks with larger contexts . so , this would be something between traps and multi - band because we still have quite large bands , and but with a lot of context also . we still have to work on finnish , , to make a decision on which mlp can be the best across the different languages . for the moment it 's the timit network , and perhaps the network trained on everything . so . now we can test these two networks on with delta and large networks . , test them also on finnish phd a: and see which one is the best . , the next part of the document is , , a summary of what everything that has been done . so . we have seventy - nine m l ps trained on one , two , three , four , five , six , seven ten on ten different databases . the number of frames is bad also , so we have one million and a half for some , three million for other , and six million for the last one . ! as we mentioned , timit is the only that 's hand - labeled , and perhaps this is what makes the difference . , the other are just viterbi - aligned . so these seventy - nine mlp differ on different things . first , with respect to the on - line normalization , there are that use bad on - line normalization , and other good on - line normalization . with respect to the features , with respect to the use of delta or no , with respect to the hidden layer size and to the targets . , but we do n't have all the combination of these different parameters s what 's this ? we only have two hundred eighty six different tests and no not two thousand . phd a: , the observation is what we discussed already . the msg problem , the fact that the mlp trained on target task decreased the error rate . but when the m - mlp is trained on the is not trained on the target task , it increased the error rate compared to using straight features . 
except if the features are bad , actually except if the features are not correctly on - line normalized . in this case the tandem is still better even if it 's trained on not on the target digits . professor d: . so it sounds like , the net corrects some of the problems with some poor normalization . but if you can do good normalization it 's ok . phd a: , so the fourth point is , the timit plus noise seems to be the training set that gives better the best network . professor d: so - let me bef before you go on to the possible issues . so , on the msg problem , that in the , in the short time solution , that is , trying to figure out what we can proceed forward with to make the greatest progress , much as i said with jrasta , even though i really like jrasta and i really like msg , it 's in category that it 's , it may be complicated . and it might be if someone 's interested in it , certainly encourage anybody to look into it in the longer term , once we get out of this particular rush for results . but in the short term , unless you have some s strong idea of what 's wrong , phd a: i but i ' ve perhaps i have the feeling that it 's something that 's quite simple or just like nnn , no high - pass filter professor d: there 's supposed to msg is supposed to have a an on - line normalization though , professor d: , but also there 's an on - line norm besides the agc , there 's an on - line normalization that 's supposed to be taking out means and variances and . in fac the on - line normalization that we 're using came from the msg design , so it 's phd a: , but but this was the bad on - line normalization . actually . are your results are still with the bad phd b: ! , ! with " two " , with " on - line - two " . , professor d: so , i agree . it 's probably something simple , i if someone , , wants to play with it for a little bit . , you 're gon na do what you 're gon na do but my would be that it 's something that is a simple thing that could take a while to find . professor d: . and the other the results , observations two and three , is , that 's what we ' ve seen . that 's that what we were concerned about is that if it 's not on the target task if it 's on the target task then it helps to have the mlp transforming it . if it if it 's not on the target task , then , depending on how different it is , you can get , a reduction in performance . and the question is now how to get one and not the other ? or how to ameliorate the problems . , because it certainly does is to have in there , when it when there is something like the training data . phd a: . so , the reason is that the perhaps the target the task dependency the language dependency , and the noise dependency phd a: , the e but this is still not clear because , i do n't think we have enough result to talk about the language dependency . , the timit network is still the best but there is also an the other difference , the fact that it 's hand - labeled . professor d: hey ! , just you can just sit here . , i d i do n't think we want to mess with the microphones but it 's just , have a seat . s summary of the first , forty - five minutes is that some work and works , and some does n't professor d: , i we can do a little better than that but if you start off with the other one , actually , that has it in words and then th that has it the associated results . so you 're saying that , although from what we see , yes there 's what you would expect in terms of a language dependency and a noise dependency . 
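The on-line normalization at issue is essentially per-dimension mean and variance tracking with an exponential forgetting factor; the "bad" version presumably got this recursion wrong. The sketch below uses an arbitrary forgetting factor and is not the exact MSG-derived recursion used in these experiments.

    import numpy as np

    def online_normalize(frames, alpha=0.995, eps=1e-6):
        # Running mean/variance per feature dimension, updated frame by
        # frame, so no utterance-level statistics are needed.
        mean = np.zeros(frames.shape[1])
        var = np.ones(frames.shape[1])
        out = np.empty_like(frames, dtype=float)
        for t, x in enumerate(frames):
            mean = alpha * mean + (1.0 - alpha) * x
            var = alpha * var + (1.0 - alpha) * (x - mean) ** 2
            out[t] = (x - mean) / np.sqrt(var + eps)
        return out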
that is , when the neural net is trained on one of those and tested on something different , we do n't do as in the target thing . but you 're saying that , it is although that general thing is observable so far , there 's something you 're not completely convinced about . and and what is that ? , you say " not clear yet " . what what do you mean ? phd a: , mmm , , that the fact that s , for ti - digits the timit net is the best , which is the english net . but the other are slightly worse . but you have two effects , the effect of changing language and the effect of training on something that 's viterbi - aligned instead of hand - labeled . so . professor d: do you think the alignments are bad ? , have you looked at the alignments ? what the viterbi alignment 's doing ? phd a: i do n't i . did - did you look at the spanish alignments carmen ? professor d: might be interesting to look at it . because , that is just looking but , it 's not clear to me you necessarily would do so badly from a viterbi alignment . it depends how good the recognizer is that 's that the engine is that 's doing the alignment . phd a: . but , perhaps it 's not really the alignment that 's bad but the just the ph phoneme string that 's used for the alignment phd a: french s , phoneme strings were corrected manually so we asked people to listen to the sentence and we gave the phoneme string and they correct them . but still , there might be errors just in the ph string of phonemes . , so this is not really the viterbi alignment , the third the third issue is the noise dependency perhaps but , this is not clear yet because all our nets are trained on the same noises and professor d: some of the nets were trained with spine and . so it and that has other noise . phd a: so . , from these results we have some questions with answers . what should be the network input ? , plp work as mfcc , . but it seems impor important to use the delta . , with respect to the network size , there 's one experiment that 's still running and we should have the result today , comparing network with five hundred and one thousand units . nnn , still no answer actually . phd a: , the training set , some answer . we can , we can tell which training set gives the best result , but we exactly why . , so . professor d: . right , the multi - english so far is the best . multi - multi - english just means " timit " , so that 's so . and and when you add other things in to broaden it , it gets worse typically . professor d: i like that . the training set is both questions , with answers and without answers . phd a: , training s right . so , the training targets actually , the two of the main issues perhaps are still the language dependency and the noise dependency . and perhaps to try to reduce the language dependency , we should focus on finding some other training targets . and labeling s labeling seems important , because of timit results . for moment you use we use phonetic targets but we could also use articulatory targets , soft targets , and perhaps even , use networks that does n't do classification but just regression so , train to have neural networks that , does a regression and , com compute features and noit not , nnn , features without noise . , transform the fea noisy features in other features that are not noisy . but continuous features . not , hard targets . professor d: , that seems like a good thing to do , probably , not again a short - term thing . 
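one concrete reading of the regression idea above , before professor d 's caveat that follows : train a network with mean-squared error to map windows of noisy features onto the corresponding clean frames , rather than onto phone classes . a minimal pytorch sketch ; the layer sizes , context width , and learning rate are illustrative assumptions , not values from the meeting .

```python
import torch
from torch import nn

# assumed shapes: 9 frames of context, 13 cepstral coefficients per frame
feat_dim, hidden, out_dim = 13 * 9, 500, 13

# regression network: no softmax, it predicts clean features directly
denoiser = nn.Sequential(
    nn.Linear(feat_dim, hidden),
    nn.Sigmoid(),
    nn.Linear(hidden, out_dim),  # clean centre frame, continuous targets
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(noisy_batch, clean_batch):
    # noisy_batch: (B, feat_dim) stacked context windows of noisy features
    # clean_batch: (B, out_dim) clean targets for the centre frame
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy_batch), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```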
one of the things about that is that it 's e u the ri i the major risk you have there of being is being dependent on very dependent on the noise and . phd a: so , this is w i wa this is one thing , this could be could help perhaps to reduce language dependency and for the noise part we could combine this with other approaches , like , the kleinschmidt approach . so the d the idea of putting all the noise that we can find inside a database . kleinschmidt was using more than fifty different noises to train his network , and so this is one approach and the other is multi - band , that is more robust to the noisy changes . so perhaps , something like multi - band trained on a lot of noises with , features - based targets could help . professor d: , if you i it 's interesting thought maybe if you just trained up w , one fantasy would be you have something like articulatory targets and you have some reasonable database , but then which is copied over many times with a range of different noises , and if cuz what you 're trying to do is come up with a core , reasonable feature set which is then gon na be used , by the hmm system . phd a: the future work is , try to connect to the to make to plug in the system to the ogi system . , there are still open questions there , where to put the mlp . professor d: and i , the real open question , e u there 's lots of open questions , but one of the core quote " open questions " for that is , if we take the , the best ones here , maybe not just the best one , but the best few you want the most promising group from these other experiments . , how do they do over a range of these different tests , not just the italian ? professor d: and y right ? and then see , again , how we know that there 's a mis there 's a loss in performance when the neural net is trained on conditions that are different than , we 're gon na test on , but , if you look over a range of these different tests , how do these different ways of combining the straight features with the mlp features , stand up over that range ? that 's that seems like the real question . and if that so if you just take plp with , the double - deltas . assume that 's the p the feature . look at these different ways of combining it . and , take let 's say , just take multi - english that works pretty for the training . and just look take that case and then look over all the different things . how does that compare between the professor d: all the different test sets , and for the couple different ways that you have of combining them . how do they stand up , over the phd a: and perhaps doing this for cha changing the variance of the streams and so on getting different scaling phd a: , so thi this sh would be more working on the mlp as an additional path instead of an insert to the to their diagram . cuz perhaps the insert idea is strange because nnn , they make lda and then we will again add a network does discriminate anal nnn , that discriminates , phd a: and because also perhaps we know that the when we have very good features the mlp does n't help . so . i . professor d: , the other thing , though , is that so . , we wanna get their path running here , if so , we can add this other . as an additional path phd a: , the way we want to do it perhaps is to just to get the vad labels and the final features . 
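phd a spells the plan out next ; at the frame level the plumbing is just dropping silence frames and concatenating the two streams , as in this sketch . the array names are illustrative , and it assumes the two feature files are already frame-aligned .

```python
import numpy as np

def combine_streams(ogi_feats, mlp_feats, vad):
    # ogi_feats: (T, D1) and mlp_feats: (T, D2) frame-aligned features;
    # vad: (T,) binary labels, 1 for speech frames
    assert len(ogi_feats) == len(mlp_feats) == len(vad)
    keep = np.asarray(vad).astype(bool)
    # keep only speech frames, then join the streams feature-wise
    return np.hstack([ogi_feats[keep], mlp_feats[keep]])
```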
phd a: so they will send us the , provide us with the feature files , and with vad , binary labels so that we can , get our mlp features and filter them with the vad and then combine them with their f feature stream . professor d: so we so . first thing we 'd wanna do there is to make that when we get those labels of final features is that we get the same results as them . without putting in a second path . professor d: just th w i just to make that we have we understand properly what things are , our very first thing to do is to double check that we get the exact same results as them on htk . professor d: , i that we need to r do we need to retrain we can just take the re their training files also . but , just for the testing , jus just make that we get the same results so we can duplicate it before we add in another cuz otherwise , we wo n't things mean . phd a: and . , so fff , lograsta , i if we want to we can try networks with lograsta filtered features . professor d: ! , the other thing is when you say comb i ' m , i ' m interrupting . that u , when you 're talking about combining multiple features , suppose we said , " ok , we ' ve got these different features and , but plp seems pretty good . " if we take the approach that mike did and have , one of the situations we have is we have these different conditions . we have different languages , we have different noises , if we have some drastically different conditions and we just train up different m l ps with them . and put them together . what what what mike found , for the reverberation case at least , who knows if it 'll work for these other ones . that you did have interpolative effects . that is , that yes , if you knew what the reverberation condition was gon na be and you trained for that , then you got the best results . but if you had , say , a heavily - reverberation ca heavy - reverberation case and a no - reverberation case , and then you fed the thing , something that was a modest amount of reverberation then you 'd get some result in between the two . so it was behaved reasonably . is tha that a fair professor d: , see , i oc you were doing some something that was so maybe the analogy is n't quite right . you were doing something that was in way a little better behaved . you had reverb for a single variable which was re , reverberation . here the problem seems to be is that we do n't have a hug a really huge net with a really huge amount of training data . but we have s f for this task , i would think , a modest amount . , a million frames actually is n't that much . we have a modest amount of training data from a couple different conditions , and then in , that and the real situation is that there 's enormous variability that we anticipate in the test set in terms of language , and noise type , and , channel characteristic , all over the map . a bunch of different dimensions . and so , i ' m just concerned that we do n't really have , the data to train up one of the things that we were seeing is that when we added in we still do n't have a good explanation for this , but we are seeing that we 're adding in , a fe few different databases and the performance is getting worse and , when we just take one of those databases that 's a pretty good one , it actually is better . and that says to me , yes , that , there might be some problems with the pronunciation models that some of the databases we 're adding in like that . 
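the several-nets idea being weighed here , reduced to code : train one mlp per condition and merge their outputs at run time . averaging the posteriors is one simple merge ( log-domain averaging is the other common choice ) ; the caveats professor d raises continue below . this is an illustrative sketch , not the scheme mike actually used .

```python
import numpy as np

def combine_posteriors(nets, frame, weights=None):
    # nets: condition-specific models, each mapping a frame to a
    # posterior vector over the same set of phone classes
    posts = np.stack([net(frame) for net in nets])      # (n_nets, n_phones)
    if weights is None:
        weights = np.full(len(nets), 1.0 / len(nets))   # equal weighting
    combined = weights @ posts
    return combined / combined.sum()                    # renormalise
```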
but one way or another we do n't have , seemingly , the ability to represent , in the neural net of the size that we have , all of the variability that we 're gon na be covering . so that i ' m hoping that , this is another take on the efficiency argument you 're making , which is i ' m hoping that with moderate size neural nets , that if we if they look at more constrained conditions they 'll have enough parameters to really represent them . professor d: i e the it 's true that the ogi folk found that using lda rasta , which is a lograsta , it 's just that they have the it 's done in the log domain , as i recall , and it 's just that they d it 's trained up , that that benefitted from on - line normalization . so they did at least in their case , it did seem to be somewhat complimentary . so will it be in our case , where we 're using the neural net ? they were not using the neural net . i . ok , so the other things you have here are , trying to improve results from a single make better . and cpu memory issues . we ' ve been ignoring that , have n't we ? professor d: , but i li , my impression you you folks have been looking at this more than me . but my impression was that , there was a strict constraint on the delay , but beyond that it was that using less memory was better , and using less cpu was better . something like that , phd a: so , but we ' ve we have to get some reference point to where we , what 's a reasonable number ? perhaps be because if it 's too large or @ professor d: , i do n't think we 're completely off the wall . that if we have , the ultimate fall back that we could do if we find we may find that we 're not really gon na worry about the m l , if the mlp ultimately , after all is said and done , does n't really help then we wo n't have it in . if the mlp does , we find , help us enough in some conditions , we might even have more than one mlp . we could simply say that is , done on the , server . and it 's we do the other manipulations that we 're doing before that . so , i that 's ok . so the key thing was , this plug into ogi . what are they what are they gon na be working do we they 're gon na be working on while we take their features , phd a: this that was pratibha . sunil , what was he doing , do you remember ? phd b: i do n't re i did n't remember . maybe he 's working with neural network . phd a: they were also mainly , working a little bit of new things , like networks and multi - band , but mainly trying to tune their system as it is now to just trying to get the best from this architecture . professor d: so i the way it would work is that you 'd get there 'd be some point where you say , " ok , this is their version - one " or whatever , and we get these vad labels and features and for all these test sets from them , and then , that 's what we work with . we have a certain level we try to improve it with this other path and then , when it gets to be , january some point , we say , " ok we have shown that we can improve this , in this way . so now what 's your newest version ? " and then maybe they 'll have something that 's better and then we 'd combine it . this is always hard . i used to work with folks who were trying to improve a good , hmm system with a neural net system and , it was a common problem that you 'd , and this actually , this is true not just for neural nets but just for in general if people were working with , rescoring , n - best lists or lattices that come came from , a mainstream recognizer . 
, you get something from the other site at one point and you work really hard on making it better with rescoring . but they 're working really hard , too . so by the time you have , improved their score , they have also improved their score and now there is n't any difference , because the other so , i at some point we 'll have to professor d: we 're integrated a little more tightly than happens in a lot of those cases . at the moment they say that they have a better thing we can we e what takes all the time here is that th we 're trying so many things , presumably , in a day we could turn around , taking a new set of things from them and rescoring it , professor d: no , this is good . that the most wide open thing is the issues about the , different trainings . , da training targets and noises and . phd a: so we can for we c we can forget combining multiple features and mlg perhaps , professor d: , for right now , i th i really liked msg . and that , one of the things i liked about it is has such different temporal properties . and , that there is ultimately a really good , potential for , bringing in things with different temporal properties . , but , we only have limited time and there 's a lot of other things we have to look at . and it seems like much more core questions are issues about the training set and the training targets , and fitting in what we 're doing with what they 're doing , and , with limited time . we have to start cutting down . so , and then , once we , having gone through this process and trying many different things , i would imagine that certain things , come up that you are curious about , that you 'd not getting to and so when the dust settles from the evaluation , that would time to go back and take whatever intrigued you most , got you most interested and work with it , for the next round . , as you can tell from these numbers , nothing that any of us is gon na do is actually gon na completely solve the problem . so , there 'll still be plenty to do . barry , you ' ve been pretty quiet . grad c: , helping out , preparing , they ' ve been running all the experiments and i ' ve been , w doing some work on the preparing all the data for them to , train and to test on . right now , i ' m focusing mainly on this final project i ' m working on in jordan 's class . grad c: , i ' m trying to so there was a paper in icslp about this multi - band , belief - net structure . this guy did it was two h m ms with a dependency arrow between the two h m and so i wanna try coupling them instead of t having an arrow that flows from one sub - band to another sub - band . i wanna try having the arrows go both ways . and , i ' m just gon na see if that better models , asynchrony in any way or . professor d: , that sounds interesting . anything to you wanted to no . silent partner in the meeting . , we got a laugh out of him , that 's good . ok , everyone h must contribute to the our sound files here . ok , so speaking of which , if we do n't have anything else that we need you happy with where we are ? know know wher know where we 're going ? professor d: , . you you happy ? you 're happy . ok everyone should be happy . you do n't have to be happy . you 're almost done . grad e: al - actually i should mention so if , about the linux machine swede . so it looks like the , neural net tools are installed there . and dan ellis i believe knows something about using that machine so if people are interested in getting jobs running on that maybe i could help with that . 
phd a: , but i 'm not sure we really need a lot of machines now . we could start computing another huge table , professor d: , we want a different table , at least there 's some different things that we 're trying to get at now . so , as far as you can tell , you 're actually ok on cpu , for training and so on ? phd a: , more is always better , but mmm , i do n't think we have to train a lot of networks now ; we just select what works fine and try to improve it . professor d: , i ' m familiar with that one . alright , so , since we did n't get a channel on for you , you do n't have to read any digits , but the rest of us will . is it on ? i wo n't touch anything cuz i ' m afraid of making the driver crash , which it seems to do pretty easily . ok , so i 'll start off the connect professor d: , let 's hope it works . maybe you should go first , so that you 're ok . professor d: why do n't you go next then . we 're done . ok , so , just finished digits . it 's good . we can turn off our microphones now .
the main topic for discussion by the berkeley meeting recorder group was progress on the experiments run as part of the group's main project , a speech recogniser for the cellular industry. this included reporting the results and drawing conclusions to shape future work. also discussed were the details of the continued collaboration with project partner ogi. further investigation into the lack of difference that using msg features makes should not be undertaken while the group is on its current short time scale for results. the same goes for anything else that comes up and looks interesting: leave it for now. the group should pick which results are looking best at this stage and take only those further. someone should look closely at the non-timit databases , their viterbi alignments , and their phoneme strings , to see if that is why timit is better. the group needs to get ogi's system from them , and get it running as ogi do , before integrating into it. it is unclear whether the english timit database provides the best results because english is the best-matched language , or because timit is the most accurately labelled dataset , being the only one that was hand-labelled. the results table was very large and difficult to follow; it was unclear which of the numbers were error rates or accuracy rates , and which were straight rates or percentages of the baseline. there is very limited training data , covering only a few conditions; test and real data are likely to encompass much more variability. speakers mn007 and fn002 have made further progress in the series of experiments they have been running in previous weeks; results were varied. the main conclusions include that training on task data is good , and that the best broad training data is the english timit database. other results show that msg makes little difference; adding an mlp improves performance when trained on task data and decreases it when not , while using deltas generally improves the situation , as does on-line normalization. work is starting with a new broad database drawn from french , spanish , timit , spine , and english and italian digits. mn007 has also started work on multi-band mlp trainings , with large context. ogi have a block diagram explaining their system , and the group are trying to fit their work into it. speaker me006 has been helping prepare data , but is mainly doing work for a class he takes , looking at modelling asynchrony.
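as a worked example of the two readings that made the table hard to follow , using the italian mismatched baseline of roughly 82% accuracy , i.e. about an 18% error rate , from the discussion above :

```latex
% two readings of a table entry of "89", given a baseline error rate of ~18%
89\% \text{ accuracy} \;\Rightarrow\; 100 - 89 = 11\% \text{ word error rate}
89 \text{ as \% of baseline error} \;\Rightarrow\; 0.89 \times 18\% \approx 16\% \text{ word error rate}
```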
###dialogue: professor d: we 're on ? yes , . , we 're testing noise robustness but let 's not get silly . ok , so , you ' ve got some , xerox things to pass out ? phd a: , i ' m for the table , but as it grows in size , it . professor d: when you get older you have these different perspectives . , lowering the word hour rate is fine , but having big font ! phd a: and also since the we ' ve started to run work on this . so since last week we ' ve started to fill the column with features w with nets trained on plp with on - line normalization but with delta also , because the column was not completely phd a: , it 's still not completely filled , but we have more results to compare with network using without plp and finally , hhh , ehhh pl - delta seems very important . i . if you take , let 's say , anyway aurora - two - b , so , the next t the second , part of the table , phd a: when we use the large training set using french , spanish , and english , you have one hundred and six without delta and eighty - nine with the delta . professor d: a all of these numbers are with a hundred percent being , the baseline performance , phd a: . so now we see that the gap between the different training set is much smaller phd a: but , actually , for english training on timit is still better than the other languages . and mmm , . and f also for italian , actually . if you take the second set of experiment for italian , so , the mismatched condition , when we use the training on timit so , it 's multi - english , we have a ninety - one number , and training with other languages is a little bit worse . phd a: and , and here the gap is still more important between using delta and not using delta . if y if i take the training s the large training set , it 's we have one hundred and seventy - two , and one hundred and four when we use delta . even if the contexts used is quite the same , because without delta we use seventeenths seventeen frames . , so the second point is that we have no single cross - language experiments , that we did not have last week . , so this is training the net on french only , or on english only , and testing on italian . and training the net on french only and spanish only and testing on , ti - digits . and , fff , what we see is that these nets are not as good , except for the multi - english , which is always one of the best . then we started to work on a large dat database containing , sentences from the french , from the spanish , from the timit , from spine , from english digits , and from italian digits . so this is the another line another set of lines in the table . , @ with spine and , actually we did this before knowing the result of all the data , so we have to redo the experiment training the net with , plp , but with delta . but this net performed quite . it 's better than the net using french , spanish , and english only . we have also started feature combination experiments . many experiments using features and net outputs together . and this is the results are on the other document . , we can discuss this after , perhaps , just , @ . , so there are four systems . the first one , is combining , two feature streams , using and each feature stream has its own mpl . so it 's the similar to the tandem that was proposed for the first . the multi - stream tandem for the first proposal . the second is using features and klt transformed mlp outputs . and the third one is to u use a single klt trans transform features as mlp outputs . mmm . 
you can comment these results , phd b: yes , s i would like to say that , , mmm , if we does n't use the delta - delta , we have an improve when we use s some combination . but when phd b: and first in the experiment - one i do i use different mlp , and is that the multi - english mlp is the better . for the ne rest of experiment i use multi - english , only multi - english . and i try to combine different type of feature , but the result is that the msg - three feature does n't work for the italian database because never help to increase the accuracy . phd a: , actually , if w we look at the table , the huge table , we see that for ti - digits msg perform as the plp , but this is not the case for italian what where the error rate is c is almost twice the error rate of plp . so , , i do n't think this is a bug but this is something in probably in the msg process that i what exactly . perhaps the fact that the there 's no low - pass filter , or no pre - emp pre - emphasis filter and that there is some dc offset in the italian , or , something simple like that . but that we need to sort out if want to get improvement by combining plp and msg because for the moment msg do does n't bring much information . and as carmen said , if we combine the two , we have the result , of plp . professor d: i , the , baseline system when you said the baseline system was , eighty - two percent , that was trained on what and tested on what ? that was , italian mismatched d , digits , is the testing , and the training is italian digits ? so the " mismatch " just refers to the noise and , microphone and , right ? so , did we have so would that then correspond to the first line here of where the training is the italian digits ? professor d: training of the net , so what that says is that in a matched condition , we end up with a fair amount worse putting in the plp . now w would do we have a number , i suppose for the matched i do n't mean matched , but use of italian training in italian digits for plp only ? phd a: which is what we have also if use plp and msg together , eighty - nine point seven . professor d: ok , so even just plp , it is not , in the matched condition i wonder if it 's a difference between plp and mel cepstra , or whether it 's that the net half , for some reason , is not helping . phd a: , we have these results . it 's not do you have this result with plp alone , j fee feeding htk ? professor d: eighty - eight point six . , so adding msg , but that 's , that 's without the neural net , phd a: , that 's without the neural net and that 's the result that ogi has also with the mfcc with on - line normalization . phd a: eighty - two is the it 's the aurora baseline , so mfcc . then we can use , ogi , they use mfcc th the baseline mfcc plus on - line normalization professor d: , i ' m , i k i keep getting confused because this is accuracy . ok . alright . so this is i was thinking all this was worse . ok so this is all better professor d: because eighty - nine is bigger than eighty - two . i ' m all better now . ok , go ahead . phd a: so what happ what happens is that when we apply on - line normalization we jump to almost ninety percent . , when we apply a neural network , is the same . we j jump to ninety percent . phd a: whatever the normalization , actually . if we use n neural network , even if the features are not correctly normalized , we jump to ninety percent . professor d: so we go from eighty - si eighty - eight point six to ninety , . 
phd a: , ninety no , ninety it 's around eighty - nine , ninety , eighty - eight . professor d: for this case , right ? alright . so , so actually , the answer for experiments with one is that adding msg , if you does not help in that case . professor d: the other ones , we 'd have to look at it , but and the multi - english , does so if we think of this in error rates , we start off with , eighteen percent error rate , roughly . and we almost , cut that in half by putting in the on - line normalization and the neural net . and the msg does n't however particularly affect things . and we cut off , i about twenty - five percent of the error . no , not quite that , is it . , two point six out of eighteen . about , sixteen percent of the error , if we use multi - english instead of the matching condition . not matching condition , but , the , italian training . professor d: yes , good . ok ? so then you 're assuming multi - english is closer to the thing that you could use since you 're not gon na have matching , data for the new for the other languages and . , one qu thing is that , i asked you this before , but i wanna double check . when you say " me " in these other tests , that 's the multi - english , professor d: but it is not all of the multi - english , it is some piece of part of it . professor d: , so you used almost all you used two thirds of it , you think . so , it 's still it hurts you seems to hurt you a fair amount to add in this french and spanish . i wonder why . phd b: mmm , with the experiment type - two , i first i tried to combine , nnn , some feature from the mlp and other feature another feature . and we s we can first the feature are without delta and delta - delta , and we can see that in the situation , the msg - three , the same help nothing . and then i do the same but with the delta and delta - delta plp delta and delta - delta . and they all p but they all put off the mlp is it without delta and delta - delta . and we have a l little bit less result than the baseline plp with delta and delta - delta . maybe if when we have the new neural network trained with plp delta and delta - delta , maybe the final result must be better . i . phd a: actually , just to be some more do this number , this eighty - seven point one number , has to be compared with the professor d: so you have to compare it with the one over that you ' ve got in a box , which is that , the eighty - four point six . so phd a: , but in this case for the eighty - seven point one we used mlp outputs for the plp net and straight features with delta - delta . and straight features with delta - delta gives you what 's on the first sheet . phd a: so we use feature out , net outputs together with features . this is not perhaps not clear here but in this table , the first column is for mlp and the second for the features . professor d: . , i see . so you 're saying w so asking the question , " what what has adding the mlp done to improve over the , phd a: so , just so , actually it decreased the accuracy . because we have eighty - eight point six . and even the mlp alone what gives the mlp alone ? multi - english plp . no , it gives eighty - three point six . so we have our eighty - three point six and now eighty - eighty point six , professor d: eighty - s it was eighty , ok , eighty - three point six and eighty - eight point six . phd b: but i but maybe if we have the neural network trained with the plp delta and delta - delta , maybe tha this can help . 
professor d: , that 's one thing , but see the other thing is that , it 's good to take the difficult case , but let 's consider what that means . what what we 're saying is that one o one of the things that my interpretation of your s original suggestion is something like this , as motivation . when we train on data that is in one sense or another , similar to the testing data , then we get a win by having discriminant training . when we train on something that 's quite different , we have a potential to have some problems . and , if we get something that helps us when it 's somewhat similar , and does n't hurt us too much when it 's quite different , that 's maybe not so bad . so the question is , if you took the same combination , and you tried it out on , on say digits , professor d: , ok . , then does that , maybe with similar noise conditions and , does it then look much better ? and so what is the range over these different kinds of tests ? so , an anyway . phd b: and , with this type of configuration which i do on experiment using the new neural net with name broad klatt s twenty - seven , d i have found more or less the same result . professor d: and maybe if you use the , delta there , you would bring it up to where it was , at least about the same for a difficult case . phd a: it 's either less information from the neural network if we use only the silence output . phd b: because in this situation we have one hundred and three feature . and then w with the first configuration , i f i am found that work , does n't work , but is better , the second configuration . because i for the del engli - plp delta and delta - delta , here i have eighty - five point three accuracy , and with the second configuration i have eighty - seven point one . professor d: , there is a another , suggestion that would apply , to the second configuration , which , was made , by , hari . and that was that , if you have feed two streams into htk , and you , change the , variances if you scale the variances associated with , these streams , you can effectively scale the streams . so , , without changing the scripts for htk , which is the rule here , you can still change the variances which would effectively change the scale of these , two streams that come in . and , so , if you do that , it may be the case that , the mlp should not be considered as strongly , . and , so this is just setting them to be , excuse me , of equal weight . maybe it should n't be equal weight . professor d: right ? , i ' m to say that gives more experiments if we wanted to look at that , but , , on the other hand it 's just experiments at the level of the htk recognition . it 's not even the htk , professor d: , do you ? let me think . maybe you do n't . , you have to change the no , you can just do it in as once you ' ve done the training professor d: , the training is just coming up with the variances so i you could just scale them all . phd a: is it i th the htk models are diagonal covariances , so i d is it professor d: that 's , exactly the point , that if you change , change what they are it 's diagonal covariance matrices , but you say what those variances are . so , that , it 's diagonal , but the diagonal means th that then you 're gon na it 's gon na internally multiply it and , i it i m implicitly exponentiated to get probabilities , and so it 's gon na it 's going to affect the range of things if you change the variances of some of the features . 
professor d: so , i it 's precisely given that model you can very simply affect , the s the strength that you apply the features . that was that was , hari 's suggestion . so , so it could just be that h treating them equally , tea treating two streams equally is just not the right thing to do . it 's potentially opening a can of worms because , maybe it should be a different number for each test set , so i the other thing is to take if one were to take , , a couple of the most successful of these , phd a: so , the next point , we ' ve had some discussion with steve and shawn , about their , articulatory , so we 'll perhaps start something next week . , discussion with hynek , sunil and pratibha for trying to plug in their our networks with their within their block diagram , where to plug in the network , after the feature , before as a plugin or as a anoth another path , discussion about multi - band and traps , actually hynek would like to see , perhaps if you remember the block diagram there is , temporal lda followed b by a spectral lda for each critical band . and he would like to replace these by a network which would , make the system look like a trap . , it would be a trap system . , this is a trap system , but where the neural network are replaced by lda . , and about multi - band , i started multi - band mlp trainings , mmh actually , i w hhh prefer to do exactly what i did when i was in belgium . so i take exactly the same configurations , seven bands with nine frames of context , and we just train on timit , and on the large database , so , with spine and everything . mmm , i ' m starting to train also , networks with larger contexts . so , this would be something between traps and multi - band because we still have quite large bands , and but with a lot of context also . we still have to work on finnish , , to make a decision on which mlp can be the best across the different languages . for the moment it 's the timit network , and perhaps the network trained on everything . so . now we can test these two networks on with delta and large networks . , test them also on finnish phd a: and see which one is the best . , the next part of the document is , , a summary of what everything that has been done . so . we have seventy - nine m l ps trained on one , two , three , four , five , six , seven ten on ten different databases . the number of frames is bad also , so we have one million and a half for some , three million for other , and six million for the last one . ! as we mentioned , timit is the only that 's hand - labeled , and perhaps this is what makes the difference . , the other are just viterbi - aligned . so these seventy - nine mlp differ on different things . first , with respect to the on - line normalization , there are that use bad on - line normalization , and other good on - line normalization . with respect to the features , with respect to the use of delta or no , with respect to the hidden layer size and to the targets . , but we do n't have all the combination of these different parameters s what 's this ? we only have two hundred eighty six different tests and no not two thousand . phd a: , the observation is what we discussed already . the msg problem , the fact that the mlp trained on target task decreased the error rate . but when the m - mlp is trained on the is not trained on the target task , it increased the error rate compared to using straight features . 
except if the features are bad , actually except if the features are not correctly on - line normalized . in this case the tandem is still better even if it 's trained on not on the target digits . professor d: . so it sounds like , the net corrects some of the problems with some poor normalization . but if you can do good normalization it 's ok . phd a: , so the fourth point is , the timit plus noise seems to be the training set that gives better the best network . professor d: so - let me bef before you go on to the possible issues . so , on the msg problem , that in the , in the short time solution , that is , trying to figure out what we can proceed forward with to make the greatest progress , much as i said with jrasta , even though i really like jrasta and i really like msg , it 's in category that it 's , it may be complicated . and it might be if someone 's interested in it , certainly encourage anybody to look into it in the longer term , once we get out of this particular rush for results . but in the short term , unless you have some s strong idea of what 's wrong , phd a: i but i ' ve perhaps i have the feeling that it 's something that 's quite simple or just like nnn , no high - pass filter professor d: there 's supposed to msg is supposed to have a an on - line normalization though , professor d: , but also there 's an on - line norm besides the agc , there 's an on - line normalization that 's supposed to be taking out means and variances and . in fac the on - line normalization that we 're using came from the msg design , so it 's phd a: , but but this was the bad on - line normalization . actually . are your results are still with the bad phd b: ! , ! with " two " , with " on - line - two " . , professor d: so , i agree . it 's probably something simple , i if someone , , wants to play with it for a little bit . , you 're gon na do what you 're gon na do but my would be that it 's something that is a simple thing that could take a while to find . professor d: . and the other the results , observations two and three , is , that 's what we ' ve seen . that 's that what we were concerned about is that if it 's not on the target task if it 's on the target task then it helps to have the mlp transforming it . if it if it 's not on the target task , then , depending on how different it is , you can get , a reduction in performance . and the question is now how to get one and not the other ? or how to ameliorate the problems . , because it certainly does is to have in there , when it when there is something like the training data . phd a: . so , the reason is that the perhaps the target the task dependency the language dependency , and the noise dependency phd a: , the e but this is still not clear because , i do n't think we have enough result to talk about the language dependency . , the timit network is still the best but there is also an the other difference , the fact that it 's hand - labeled . professor d: hey ! , just you can just sit here . , i d i do n't think we want to mess with the microphones but it 's just , have a seat . s summary of the first , forty - five minutes is that some work and works , and some does n't professor d: , i we can do a little better than that but if you start off with the other one , actually , that has it in words and then th that has it the associated results . so you 're saying that , although from what we see , yes there 's what you would expect in terms of a language dependency and a noise dependency . 
that is , when the neural net is trained on one of those and tested on something different , we do n't do as in the target thing . but you 're saying that , it is although that general thing is observable so far , there 's something you 're not completely convinced about . and and what is that ? , you say " not clear yet " . what what do you mean ? phd a: , mmm , , that the fact that s , for ti - digits the timit net is the best , which is the english net . but the other are slightly worse . but you have two effects , the effect of changing language and the effect of training on something that 's viterbi - aligned instead of hand - labeled . so . professor d: do you think the alignments are bad ? , have you looked at the alignments ? what the viterbi alignment 's doing ? phd a: i do n't i . did - did you look at the spanish alignments carmen ? professor d: might be interesting to look at it . because , that is just looking but , it 's not clear to me you necessarily would do so badly from a viterbi alignment . it depends how good the recognizer is that 's that the engine is that 's doing the alignment . phd a: . but , perhaps it 's not really the alignment that 's bad but the just the ph phoneme string that 's used for the alignment phd a: french s , phoneme strings were corrected manually so we asked people to listen to the sentence and we gave the phoneme string and they correct them . but still , there might be errors just in the ph string of phonemes . , so this is not really the viterbi alignment , the third the third issue is the noise dependency perhaps but , this is not clear yet because all our nets are trained on the same noises and professor d: some of the nets were trained with spine and . so it and that has other noise . phd a: so . , from these results we have some questions with answers . what should be the network input ? , plp work as mfcc , . but it seems impor important to use the delta . , with respect to the network size , there 's one experiment that 's still running and we should have the result today , comparing network with five hundred and one thousand units . nnn , still no answer actually . phd a: , the training set , some answer . we can , we can tell which training set gives the best result , but we exactly why . , so . professor d: . right , the multi - english so far is the best . multi - multi - english just means " timit " , so that 's so . and and when you add other things in to broaden it , it gets worse typically . professor d: i like that . the training set is both questions , with answers and without answers . phd a: , training s right . so , the training targets actually , the two of the main issues perhaps are still the language dependency and the noise dependency . and perhaps to try to reduce the language dependency , we should focus on finding some other training targets . and labeling s labeling seems important , because of timit results . for moment you use we use phonetic targets but we could also use articulatory targets , soft targets , and perhaps even , use networks that does n't do classification but just regression so , train to have neural networks that , does a regression and , com compute features and noit not , nnn , features without noise . , transform the fea noisy features in other features that are not noisy . but continuous features . not , hard targets . professor d: , that seems like a good thing to do , probably , not again a short - term thing . 
one of the things about that is that it 's e u the ri i the major risk you have there of being is being dependent on very dependent on the noise and . phd a: so , this is w i wa this is one thing , this could be could help perhaps to reduce language dependency and for the noise part we could combine this with other approaches , like , the kleinschmidt approach . so the d the idea of putting all the noise that we can find inside a database . kleinschmidt was using more than fifty different noises to train his network , and so this is one approach and the other is multi - band , that is more robust to the noisy changes . so perhaps , something like multi - band trained on a lot of noises with , features - based targets could help . professor d: , if you i it 's interesting thought maybe if you just trained up w , one fantasy would be you have something like articulatory targets and you have some reasonable database , but then which is copied over many times with a range of different noises , and if cuz what you 're trying to do is come up with a core , reasonable feature set which is then gon na be used , by the hmm system . phd a: the future work is , try to connect to the to make to plug in the system to the ogi system . , there are still open questions there , where to put the mlp . professor d: and i , the real open question , e u there 's lots of open questions , but one of the core quote " open questions " for that is , if we take the , the best ones here , maybe not just the best one , but the best few you want the most promising group from these other experiments . , how do they do over a range of these different tests , not just the italian ? professor d: and y right ? and then see , again , how we know that there 's a mis there 's a loss in performance when the neural net is trained on conditions that are different than , we 're gon na test on , but , if you look over a range of these different tests , how do these different ways of combining the straight features with the mlp features , stand up over that range ? that 's that seems like the real question . and if that so if you just take plp with , the double - deltas . assume that 's the p the feature . look at these different ways of combining it . and , take let 's say , just take multi - english that works pretty for the training . and just look take that case and then look over all the different things . how does that compare between the professor d: all the different test sets , and for the couple different ways that you have of combining them . how do they stand up , over the phd a: and perhaps doing this for cha changing the variance of the streams and so on getting different scaling phd a: , so thi this sh would be more working on the mlp as an additional path instead of an insert to the to their diagram . cuz perhaps the insert idea is strange because nnn , they make lda and then we will again add a network does discriminate anal nnn , that discriminates , phd a: and because also perhaps we know that the when we have very good features the mlp does n't help . so . i . professor d: , the other thing , though , is that so . , we wanna get their path running here , if so , we can add this other . as an additional path phd a: , the way we want to do it perhaps is to just to get the vad labels and the final features . 
phd a: so they will send us the , provide us with the feature files , and with vad , binary labels so that we can , get our mlp features and filter them with the vad and then combine them with their f feature stream . professor d: so we so . first thing we 'd wanna do there is to make that when we get those labels of final features is that we get the same results as them . without putting in a second path . professor d: just th w i just to make that we have we understand properly what things are , our very first thing to do is to double check that we get the exact same results as them on htk . professor d: , i that we need to r do we need to retrain we can just take the re their training files also . but , just for the testing , jus just make that we get the same results so we can duplicate it before we add in another cuz otherwise , we wo n't things mean . phd a: and . , so fff , lograsta , i if we want to we can try networks with lograsta filtered features . professor d: ! , the other thing is when you say comb i ' m , i ' m interrupting . that u , when you 're talking about combining multiple features , suppose we said , " ok , we ' ve got these different features and , but plp seems pretty good . " if we take the approach that mike did and have , one of the situations we have is we have these different conditions . we have different languages , we have different noises , if we have some drastically different conditions and we just train up different m l ps with them . and put them together . what what what mike found , for the reverberation case at least , who knows if it 'll work for these other ones . that you did have interpolative effects . that is , that yes , if you knew what the reverberation condition was gon na be and you trained for that , then you got the best results . but if you had , say , a heavily - reverberation ca heavy - reverberation case and a no - reverberation case , and then you fed the thing , something that was a modest amount of reverberation then you 'd get some result in between the two . so it was behaved reasonably . is tha that a fair professor d: , see , i oc you were doing some something that was so maybe the analogy is n't quite right . you were doing something that was in way a little better behaved . you had reverb for a single variable which was re , reverberation . here the problem seems to be is that we do n't have a hug a really huge net with a really huge amount of training data . but we have s f for this task , i would think , a modest amount . , a million frames actually is n't that much . we have a modest amount of training data from a couple different conditions , and then in , that and the real situation is that there 's enormous variability that we anticipate in the test set in terms of language , and noise type , and , channel characteristic , all over the map . a bunch of different dimensions . and so , i ' m just concerned that we do n't really have , the data to train up one of the things that we were seeing is that when we added in we still do n't have a good explanation for this , but we are seeing that we 're adding in , a fe few different databases and the performance is getting worse and , when we just take one of those databases that 's a pretty good one , it actually is better . and that says to me , yes , that , there might be some problems with the pronunciation models that some of the databases we 're adding in like that . 
but one way or another we do n't have , seemingly , the ability to represent , in the neural net of the size that we have , all of the variability that we 're gon na be covering . so that i ' m hoping that , this is another take on the efficiency argument you 're making , which is i ' m hoping that with moderate size neural nets , that if we if they look at more constrained conditions they 'll have enough parameters to really represent them . professor d: i e the it 's true that the ogi folk found that using lda rasta , which is a lograsta , it 's just that they have the it 's done in the log domain , as i recall , and it 's just that they d it 's trained up , that that benefitted from on - line normalization . so they did at least in their case , it did seem to be somewhat complimentary . so will it be in our case , where we 're using the neural net ? they were not using the neural net . i . ok , so the other things you have here are , trying to improve results from a single make better . and cpu memory issues . we ' ve been ignoring that , have n't we ? professor d: , but i li , my impression you you folks have been looking at this more than me . but my impression was that , there was a strict constraint on the delay , but beyond that it was that using less memory was better , and using less cpu was better . something like that , phd a: so , but we ' ve we have to get some reference point to where we , what 's a reasonable number ? perhaps be because if it 's too large or @ professor d: , i do n't think we 're completely off the wall . that if we have , the ultimate fall back that we could do if we find we may find that we 're not really gon na worry about the m l , if the mlp ultimately , after all is said and done , does n't really help then we wo n't have it in . if the mlp does , we find , help us enough in some conditions , we might even have more than one mlp . we could simply say that is , done on the , server . and it 's we do the other manipulations that we 're doing before that . so , i that 's ok . so the key thing was , this plug into ogi . what are they what are they gon na be working do we they 're gon na be working on while we take their features , phd a: this that was pratibha . sunil , what was he doing , do you remember ? phd b: i do n't re i did n't remember . maybe he 's working with neural network . phd a: they were also mainly , working a little bit of new things , like networks and multi - band , but mainly trying to tune their system as it is now to just trying to get the best from this architecture . professor d: so i the way it would work is that you 'd get there 'd be some point where you say , " ok , this is their version - one " or whatever , and we get these vad labels and features and for all these test sets from them , and then , that 's what we work with . we have a certain level we try to improve it with this other path and then , when it gets to be , january some point , we say , " ok we have shown that we can improve this , in this way . so now what 's your newest version ? " and then maybe they 'll have something that 's better and then we 'd combine it . this is always hard . i used to work with folks who were trying to improve a good , hmm system with a neural net system and , it was a common problem that you 'd , and this actually , this is true not just for neural nets but just for in general if people were working with , rescoring , n - best lists or lattices that come came from , a mainstream recognizer . 
, you get something from the other site at one point and you work really hard on making it better with rescoring . but they 're working really hard , too . so by the time you have , improved their score , they have also improved their score and now there is n't any difference , because the other so , i at some point we 'll have to professor d: we 're integrated a little more tightly than happens in a lot of those cases . at the moment they say that they have a better thing we can we e what takes all the time here is that th we 're trying so many things , presumably , in a day we could turn around , taking a new set of things from them and rescoring it , professor d: no , this is good . that the most wide open thing is the issues about the , different trainings . , da training targets and noises and . phd a: so we can for we c we can forget combining multiple features and mlg perhaps , professor d: , for right now , i th i really liked msg . and that , one of the things i liked about it is has such different temporal properties . and , that there is ultimately a really good , potential for , bringing in things with different temporal properties . , but , we only have limited time and there 's a lot of other things we have to look at . and it seems like much more core questions are issues about the training set and the training targets , and fitting in what we 're doing with what they 're doing , and , with limited time . we have to start cutting down . so , and then , once we , having gone through this process and trying many different things , i would imagine that certain things , come up that you are curious about , that you 'd not getting to and so when the dust settles from the evaluation , that would time to go back and take whatever intrigued you most , got you most interested and work with it , for the next round . , as you can tell from these numbers , nothing that any of us is gon na do is actually gon na completely solve the problem . so , there 'll still be plenty to do . barry , you ' ve been pretty quiet . grad c: , helping out , preparing , they ' ve been running all the experiments and i ' ve been , w doing some work on the preparing all the data for them to , train and to test on . right now , i ' m focusing mainly on this final project i ' m working on in jordan 's class . grad c: , i ' m trying to so there was a paper in icslp about this multi - band , belief - net structure . this guy did it was two h m ms with a dependency arrow between the two h m and so i wanna try coupling them instead of t having an arrow that flows from one sub - band to another sub - band . i wanna try having the arrows go both ways . and , i ' m just gon na see if that better models , asynchrony in any way or . professor d: , that sounds interesting . anything to you wanted to no . silent partner in the meeting . , we got a laugh out of him , that 's good . ok , everyone h must contribute to the our sound files here . ok , so speaking of which , if we do n't have anything else that we need you happy with where we are ? know know wher know where we 're going ? professor d: , . you you happy ? you 're happy . ok everyone should be happy . you do n't have to be happy . you 're almost done . grad e: al - actually i should mention so if , about the linux machine swede . so it looks like the , neural net tools are installed there . and dan ellis i believe knows something about using that machine so if people are interested in getting jobs running on that maybe i could help with that . 
phd a: , but i if we really need now a lot of machines . we could start computing another huge table , we professor d: . , we want a different table , at least there 's some different things that we 're trying to get at now . so . , as far as you can tell , you 're actually ok on c - on cpu , for training and so on ? phd a: . so . , more is always better , but mmm , i do n't think we have to train a lot of networks , now that we know we just select what works fine and try to improve this professor d: , i ' m familiar with that one , alright , so , since , we did n't ha get a channel on for you , you do n't have to read any digits but the rest of us will . , is it on ? . we did n't i wo n't touch anything cuz i ' m afraid of making the driver crash which it seems to do , pretty easily . ok , so we 'll i 'll start off the connect the professor d: , let 's hope it works . maybe you should go first and see so that you 're ok . professor d: why do n't you go next then . we 're done . ok , so . just finished digits . , so . , it 's good . i we can turn off our microphones now . ###summary: the main topic for discussion by the berkeley meeting recorder group was progress on the experiments run as part of the group's main project , a speech recogniser for the cellular industry. this included reporting the results , and making conclusions to shape future work. also discussed were the details of the continued collaboration with project partner ogi. further investigation into the lack of difference that using msg features makes should not be made while they are on their current short time scale for results. the same goes for anything else that comes up and looks interesting , leave it for just now. the group really should pick which results are looking the best at this stage , and take only them further. someone should look closely at the non-timit databases , their viterbi alignments , and their phoneme strings to see if that is why timit is better. the group needs to get ogi's system from them , and get it running like they do , before integrating into it. it is unclear whether the english timit database provides the best results because english is the best language or because timit , having been hand labelled , is the most accurately labelled dataset. the results table was very large and difficult to follow; it was unclear which of the numbers were error or accuracy rates , and straight rates or percentages of the baseline. there is very limited training data , over only a few conditions. test and real data are likely to encompass much more variability. speakers mn007 and fn002 have made further progress in the series of experiments they have been running in previous weeks; results were varied. the main conclusions include that training on task data is good , and the best broad training data is the english timit database. other results show that msg makes little difference , adding mlp improves when trained on task data , decreases figures when not , while using delta generally improves the situation , as does on-line normalization. work is starting with a new broad database drawn from english , french , timit , spine and english and italian digits. mn007 has also started work on multi-band mlp trainings , with large context. ogi have a block diagram explaining their system , and the group are trying to fit their work into it. speaker me006 has been helping prepare data , but is mainly doing work for a class he takes , looking at modelling asynchrony.
professor b: somebody else should run this . i ' m sick of being the one to go through and say , " , what do you think about this ? " you wanna ? phd f: let 's see , maybe we should just get a list of items things that we should talk about . , i there 's the usual updates , everybody going around and saying , , what they 're working on , the things that happened the last week . but aside from that is there anything in particular that anybody wants to bring up phd f: for today ? no ? so why do n't we just around and people can give updates . , do you want to start , stephane ? phd c: alright . , the first thing maybe is that the p eurospeech paper is , accepted . phd c: so it 's the paper that describe the , system that were proposed for the aurora . phd c: so and the , fff comments seems from the reviewer are good . so . mmm phd c: then , whhh , i ' ve been working on t mainly on - line normalization this week . , i ' ve been trying different slightly different approaches . , the first thing is trying to play a little bit again with the , time constant . , second thing is , the training of , on - line normalization with two different means , one mean for the silence and one for the speech . and so i have two recursions which are controlled by the , probability of the voice activity detector . mmm . this actually do n't s does n't seem to help , although it does n't hurt . but , both on - line normalization approach seems equivalent . , they phd c: i did n't look , more closely . it might be , . - . , there is one thing that we can observe , is that the mean are more different for c - zero and c - one than for the other coefficients . and , it the c - one is there are strange thing happening with c - one , is that when you have different noises , the mean for the silence portion is can be different . so when you look at the trajectory of c - one , it 's has a strange shape and i was expecting th the s that these two mean helps , especially because of the strange c - ze c - one shape , which can like , yo you can have , a trajectory for the speech and then when you are in the silence it goes somewhere , but if the noise is different it goes somewhere else . so which would mean that if we estimate the mean based on all the signal , even though we have frame dropping , but we do n't frame ev , drop everything , but , this can hurts the estimation of the mean for speech , mmm . but i still have to investigate further , . , a third thing is , that instead of t having a fixed time constant , i try to have a time constant that 's smaller at the beginning of the utterances to adapt more quickly to the r something that 's closer to the right mean . t t and then this time constant increases and i have a threshold that phd c: , if it 's higher than a certain threshold , i keep it to this threshold to still , adapt , the mean when if the utterance is , long enough to continue to adapt after , like , one second or , this does n't help neither , but this does n't hurt . so , . it seems pretty phd f: was n't there some experiment you were gon na try where you did something differently for each , i whether it was each mel band or each , , fft bin or someth there was something you were gon na , some parameter you were gon na vary depending on the frequency . i if that was phd c: i it was i . no . u maybe it 's this idea of having different on - line normalization , tunings for the different mfcc 's . phd f: , morgan , you brought it up a couple meetings ago . 
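( a minimal sketch of the two-recursion normalization stephane describes above , assuming simple exponential running means gated by the vad probability ; the blend used for the subtraction , the constant alpha and all names are guesses , not his actual implementation : )

```python
import numpy as np

def dual_mean_online_norm(cepstra, p_speech, alpha=0.995):
    # cepstra: (T, D) feature frames; p_speech: (T,) vad speech probabilities.
    # two running means, each updated in proportion to its vad probability.
    mu_speech = cepstra[0].astype(float)
    mu_silence = cepstra[0].astype(float)
    normalized = np.empty_like(cepstra, dtype=float)
    for t in range(len(cepstra)):
        x, p = cepstra[t], p_speech[t]
        mu_speech += (1.0 - alpha) * p * (x - mu_speech)
        mu_silence += (1.0 - alpha) * (1.0 - p) * (x - mu_silence)
        # subtract a probability-weighted blend of the two means
        normalized[t] = x - (p * mu_speech + (1.0 - p) * mu_silence)
    return normalized
```

the adaptive time constant he also mentions could be grafted on by letting alpha start small and grow toward a ceiling over the first second or so of the utterance .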
and then it was something about , some and then somebody said " , it does seem like , c - zero is the one that 's , the major one " or , s i ca n't remember exactly what it was now . phd c: . there , actually , . s , it 's very important to normalize c - zero and much less to normalize the other coefficients . and , actu , at least with the current on - line normalization scheme . we , we know that normalizing c - one does n't help with the current scheme . and . in my idea , i was thinking that the reason is maybe because of these funny things that happen between speech and silence which have different means . but maybe it 's not so easy to professor b: , i really would like to suggest looking , a little bit at the kinds of errors . i know you can get lost in that and go forever and not see too much , but sometimes , but , just seeing that each of these things did n't make things better may not be enough . it may be that they 're making them better in some ways and worse in others , or increasing insertions and decreasing deletions , or , , helping with noisy case but hurting in quiet case . and if you saw that then maybe you it would something would occur to you of how to deal with that . phd c: w , so that 's it , for the on - line normalization . i ' ve been playing a little bit with some thresholding , and , mmm , as a first experiment , i , what i did is t is to take , to measure the average no , the maximum energy of s each utterance and then put a threshold , this for each mel band . then put a threshold that 's fifteen db below , a couple of db below this maximum , phd c: actually it was not a threshold , it was just adding noise . so i was adding a white noise energy , that 's fifteen db below the maximum energy of the utterance . when we look at the , mfcc that result from this , they are a lot more smoother . when we compare , like , a channel zero and channel one utterance , so a clean and , the same noisy utterance , there is almost no difference between the cepstral coefficients of the two . and the result that we have in term of speech recognition , actually it 's not worse , it 's not better neither , but it 's , surprising that it 's not worse because you add noise that 's fifteen db just fifteen db below the maximum energy . at least phd c: as you add a white noise that are has a very high energy , it whitens everything phd c: and the high - energy portion of the speech do n't get much affected anyway by the other noise . and as the noise you add is the same is the shape , it 's also the same . so they have the trajectory are very , very similar . and and professor b: so , again , if you trained in one noise and tested in the same noise , you 'd , given enough training data you do n't do b do badly . the reason that we d that we have the problems we have is because it 's different in training and test . even if the general kind is the same , the exact instances are different . and and so when you whiten it , then it 's like you the only noise to first order , the only th noise that you have is white noise and you ' ve added the same thing to training and test . so it 's , phd f: so would that be similar to , like , doing the smoothing , then , over time or ? phd c: it 's it 's something that , that affects more or less the silence portions because , anyway , the sp the portion of speech that ha have high energy are not ch a lot affected by the noises in the aurora database . 
if if you compare th the two shut channels of speechdat - car during speech portion , it 's n the mfcc are not very different . they are very different when energy 's lower , like during fricatives or during speech pauses . and , professor b: , but you 're still getting more recognition errors , which means that the differences , even though they look like they 're not so big , are hurting your recognition . phd c: it did n't . so , but in this case i really expect that maybe the two these two stream of features , they are very different . , and maybe we could gain something by combining them professor b: , the other thing is that you just picked one particular way of doing it . , first place it 's fifteen db , down across the utterance . and maybe you 'd want to have something that was a little more adaptive . secondly , you happened to pick fifteen db and maybe twenty 'd be better , or twelve . phd f: so what was the threshold part of it ? was the threshold , how far down ? professor b: , he had to figure out how much to add . so he was looking at the peak value . and then phd f: and and so what 's ho i do n't understand . how does it go ? if it if the peak value 's above some threshold , then you add the noise ? or if it 's below s phd c: i systematically add the noise , but the , noise level is just some threshold below the peak . phd c: which is not really noise , actually . it 's just adding a constant to each of the mel , energy . to each of the mel filter bank . so , it 's really , white noise . i th professor b: so then afterwards a log is taken , and that 's so why the little variation tends to go away . phd c: . so may , the this threshold is still a factor that we have to look at . and i , maybe a constant noise addition would be fine also , professor b: or or not constant but , varying over time is another way to go . were you using the normalization in addition to this ? , what was the rest of the system ? phd c: . it was it was , the same system . it was the same system . , . a third thing is that , i play a little bit with the , finding what was different between , and there were a couple of differences , like the lda filters were not the same . he had the france telecom blind equalization in the system . the number o of mfcc that was were used was different . you used thirteen and we used fifteen . , a bunch of differences . and , actually the result that he got were much better on ti - digits especially . so i ' m investigated to see what was the main factor for this difference . and it seems that the lda filter is was hurting . , so when we put s some noise compensation the , lda filter that 's derived from noisy speech is not more anymore optimal . and it makes a big difference , on ti - digits trained on clean . , if we use the old lda filter , the lda filter that was in the proposal , we have , like , eighty - two point seven percent recognition rate , on noisy speech when the system is trained on clean speech . and when we use the filter that 's derived from clean speech we jumped so from eighty - two point seven to eighty - five point one , which is a huge leap . so now the results are more similar , i do n't i will not , investigate on the other differences , which is like the number of mfcc that we keep and other small things that we can optimize later on anyway . professor b: but on the other hand if everybody is trying different kinds of noise suppression things and , it might be good to standardize on the piece that we 're not changing . 
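( the noise-floor experiment stephane describes above , in sketch form ; the fifteen db depth and taking the log afterwards follow the discussion , but the reading of the maximum as per mel band and per utterance , and the names , are assumptions : )

```python
import numpy as np

def mel_log_with_noise_floor(mel_energies, depth_db=15.0):
    # mel_energies: (T, n_bands) linear filterbank energies for one utterance.
    # add a constant per band, depth_db below that band's maximum energy,
    # before the log is taken -- 'really white noise' in the sense above.
    floor = mel_energies.max(axis=0) * 10.0 ** (-depth_db / 10.0)
    return np.log(mel_energies + floor)
```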
so if there 's any particular reason to ha pick one or the other , which which one is closer to what the proposal was that was submitted to aurora ? are they both ? phd c: . th , the new system that i tested is , i , closer because it does n't have it have less of france telecom , professor b: , no , i ' m i , you 're trying to add in france telecom . professor b: tell them about the rest of it . like you said the number of filters might be different . or professor b: cep so , we 'd wanna standardize there , would n't we ? so , sh you guys should pick something and , all th all three of you . phd d: , so the right now , the system that is there in the what we have in the repositories , with uses fifteen . phd c: but we will use the lda filters f derived from clean speech . , actually it 's not the lda filter . phd d: so , we have n't w we have been always using , fifteen coefficients , not thirteen ? , that 's something 's then phd d: . ma - maybe we can , at least , i 'll t s run some experiments to see whether once i have this noise compensation to see whether thirteen and fifteen really matters or not . never tested it with the compensation , but without , compensation it was like fifteen was s slightly better than thirteen , so that 's why we stuck to thirteen . phd d: , that 's the other thing . , without noise compensation certainly c - zero is better than log energy . be - , because the there are more , mismatched conditions than the matching conditions for testing . always for the matched condition , you always get a slightly better performance for log energy than c - zero . but not for , for matched and the clean condition both , you get log energy you get a better performance with log energy . , maybe once we have this noise compensation , i , we have to try that also , whether we want to go for c - zero or log energy . we can see that . grad a: , still working on my quals preparation . , so i ' m thinking about , starting some , cheating experiments to , determine the , the relative effectiveness of , some intermediate categories that i want to classify . so , , if i know where voicing occurs and everything , i would do a phone , phone recognition experiment , somehow putting in the , the perfect knowledge that i have about voicing . so , in particular i was thinking , in the hybrid framework , just taking those lna files , and , setting to zero those probabilities that , that these phones are not voicing . so say , like , i know this particular segment is voicing , i would say , go into the corresponding lna file and zonk out the posteriors for , those phonemes that , are not voiced , and then see what kinds of improvements i get . and so this would be a useful thing , to know in terms of , like , which of these categories are good for , speech recognition . so , that 's i hope to get those , those experiments done by the time quals come around in july . phd f: so do you just take the probabilities of the other ones and spread them out evenly among the remaining ones ? grad a: i was thinking ok , so just set to some really low number , the non - voiced , phones . right ? and then renormalize . right . phd f: that will be really interesting to see , so then you 're gon na feed the those into some standard recognizer . phd f: , so then you 'll feed those so where do the outputs of the net go into if you 're doing phone recognition ? grad a: , the outputs of the net go into the standard , h , icsi hybrid , recognizer . 
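( a minimal sketch of the cheating experiment barry outlines above ; reading and writing the lna files is omitted , and the set of voiced phone indices is a placeholder , not the actual icsi phone set : )

```python
import numpy as np

def apply_voicing_oracle(posteriors, frame_is_voiced, voiced_idx, eps=1e-6):
    # posteriors: (T, n_phones) per-frame phone posteriors (as decoded from
    # an lna file); frame_is_voiced: (T,) oracle booleans; voiced_idx:
    # indices of the voiced phones. all names here are illustrative.
    voiced_mask = np.zeros(posteriors.shape[1], dtype=bool)
    voiced_mask[list(voiced_idx)] = True
    out = posteriors.copy()
    for t, is_voiced in enumerate(frame_is_voiced):
        out[t, voiced_mask != is_voiced] = eps   # zonk the wrong voicing class
        out[t] /= out[t].sum()                   # renormalize the rest
    return out
```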
so maybe , chronos phd f: an - and you 're gon na the you 're gon na do phone recognition with that ? grad a: so . and , another thing would be to extend this to , digits where look at whole words . and i would be able to see , not just , like , phoneme events , but , inter - phoneme events . so , like , this is from a stop to a vo a vocalic segment . so something that is transitional in nature . phd f: let 's see , i have n't done a whole lot on anything related to this week . i ' ve been focusing mainly on meeting recorder . so , i 'll just pass it on to dave . grad g: , ok . , in my lunch talk last week i said i 'd tried phase normalization and gotten garbage results using that l , long - term mean subtraction approach . it turned out there was a bug in my matlab code . so i tried it again , and , the results were better . i got intelligible speech back . but they still were n't as good as just subtracting the magnitude the log magnitude means . and also i ' ve been talking to , andreas and thilo about the , smartkom language model and about coming up with a good model for , far mike use of the smartkom system . so i ' m gon na be working on , implementing this mean subtraction approach in the far - mike system for the smartkom system , . and , one of the experiments we 're gon na do is , we 're gon na , train the a broadcast news net , which is because that 's what we ' ve been using so far , and , adapt it on some other data . , an - andreas wants to use , data that resembles read speech , like these digit readings , because he feels that the smartkom system interaction is not gon na be exactly conversational . s so actually i was wondering , how long does it take to train that broadcast news net ? professor b: so but , , you can get i if you even want to run the big one , , in the final system , cuz , it takes a little while to run it . so , you can scale it down by i ' m , it was two , three weeks for training up for the large broadcast news test set training set . i how much you 'd be training on . the full ? professor b: , i so if you trained on half as much and made the net , half as big , then it would be one fourth the amount of time and it 'd be nearly as good . also , i we had we ' ve had these , little di discussions i you ha have n't had a chance to work with it too much about , m other ways of taking care of the phase . so , i that was something i could say would be that we ' ve talked a little bit about professor b: you just doing it all with complex arithmetic and , and not , doing the polar representation with magnitude and phase . but it looks like there 's ways that one could potentially just work with the complex numbers and in principle get rid of the effects of the average complex spectrum . but grad g: actually , regarding the phase normalization so i did two experiments , and one is so , phases get added , modulo two pi , and because you only know the phase of the complex number t to a value modulo two pi . and so at first , that , what i should do is unwrap the phase because that will undo that . , but i actually got worse results doing that unwrapping using the simple phase unwrapper that 's in matlab than i did not unwrapping . professor b: so i ' m still hopeful that , we do n't even know if the phase is something the average phase is something that we do want to remove . , maybe there 's some deeper reason why it is n't the right thing to do . but , at least in principle it looks like there 's , a couple potential ways to do it . 
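( the log-magnitude mean subtraction dave reports working , sketched under the assumption of a whole-utterance per-bin mean ; the names and the small flooring constant are ours : )

```python
import numpy as np

def log_magnitude_mean_subtraction(stft):
    # stft: complex array (frames, bins). subtract each bin's long-term
    # log-magnitude mean over the utterance and leave the phase alone --
    # the variant that worked better than also normalizing the phase.
    log_mag = np.log(np.abs(stft) + 1e-10)
    log_mag -= log_mag.mean(axis=0, keepdims=True)
    return np.exp(log_mag) * np.exp(1j * np.angle(stft))
```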
one one being to just work with the complex numbers , and , in rectangular coordinates . and the other is to , do a taylor series so you work with the complex numbers and then when you get the spectrum the average complex spectrum , actually divide it out , as opposed to taking the log and subtracting . so then , there might be some numerical issues . we do n't really know that . the other thing we talked a little bit about was taylor series expansion . and , , actually i was talking to dick karp about it a little bit , and , since i got thinking about it , and , so one thing is that y you 'd have to do , we may have to do this on a whiteboard , but you have to be a little careful about scaling the numbers that you 're taking the complex numbers that you 're taking the log of because the taylor expansion for it has , a square and a cube , and . and and so if you have a number that is modulus , , very different from one it should be right around one , if it 's cuz it 's a expansion of log one minus epsilon or o is one plus epsilon , or is it one plus ? , there 's an epsilon squared over two and an epsilon cubed over three , and . so if epsilon is bigger than one , then it diverges . so you have to do some scaling . but that 's not a big deal cuz it 's the log of k times a complex number , then you can just that 's the same as log of k plus log of the complex number . so there 's converges . but . phd d: so , i ' ve been , implementing this , wiener filtering for this aurora task . and , i actually thought it was doing fine when i tested it once . i it 's , like , using a small section of the code . and then i ran the whole recognition experiment with italian and i got , like , worse results than not using it . then i so , i ' ve been trying to find where the problem came from . and then it looks like i have some problem in the way there is some very silly bug somewhere . and , ugh ! , i , it actually i it actually made the whole thing worse . i was looking at the spectrograms that i got and it 's , like w it 's very horrible . like , when i professor b: i missed the v i was distracted . i missed the very first sentence . so then , i ' m a little lost on the rest . what what ? phd d: i actually implemented the wiener f fil filtering as a module and then tested it out separately . phd d: and it gave , like got the signal out and it was ok . so , i plugged it in somewhere and then , it 's like i had to remove some part and then plugging it in somewhere . and then i in that process i messed it up somewhere . so , it was real , it was all fine and then i ran it , and i got something worse than not using it . so , i was like i ' m trying to find where the m problem came , and it seems to be , like , somewhere some silly . and , the other thing , was , hynek showed up one suddenly on one day and then i was t talking wi phd d: so i was actually that day i was thinking about d doing something about the wiener filtering , and then carlos matter of . and then he showed up and then i told him . and then he gave me a whole bunch of filters what carlos used for his , thesis and then that was something which came up . and then , so , i ' m actually , thinking of using that also in this , w wiener filtering because that is a m modified wiener filtering approach , where instead of using the current frame , it uses adjacent frames also in designing the wiener filter . 
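( for reference , the two standard identities behind the scaling argument morgan sketches above ; nothing here is specific to this system : )

```latex
\log(1+\varepsilon) = \varepsilon - \frac{\varepsilon^{2}}{2} + \frac{\varepsilon^{3}}{3} - \cdots ,
\qquad |\varepsilon| < 1 ,
\qquad\text{and}\qquad
\log(k z) = \log k + \log z .
```

so a complex number whose modulus is far from one is first divided by a positive scale k chosen to bring it near one , the series is applied there , and log k is added back afterwards .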
so instead of designing our own new wiener filters , i may just use one of those carlos filters in this implementation and see whether it actually gives me something better than using just the current f current frame , which is in a way , something like the smoothing the wiener filter but @ s so , i was h i ' m , like that so that is the next thing . once this i once i sort this pro , problem out maybe i 'll just go into that also . and the other thing was about the subspace approach . so , i , like , plugged some groupings for computing this eigen , s values and eigenvectors . so just @ some small block of things which i needed to put together for the subspace approach . and i ' m in the process of , like , building up that . { nonvocalsound } . i that 's it . and , th that 's where i am right now . phd e: mmm . i ' m working with vts . , i do several experiment with the spanish database first , only with vts and nothing more . not vad , no lda , nothing more . phd f: right , right . i ask you that every single meeting , do n't i ? professor b: it 's good to have some , cases of the same utterance at different times . phd e: vts . i ' m sor , the question is that remove some noise but not too much . and when we put the m the , vad , the result is better . and we put everything , the result is better , but it 's not better than the result that we have without vts . no , no . professor b: i see . so that @ given that you 're using the vad also , the effect of the vts is not so far professor b: do you how much of that do you think is due to just the particular implementation and how much you 're adjusting it ? or how much do you think is intrinsic to ? phd e: , i do the experiment using only the f onl , to use on only one fair estimation of the noise . and also i did some experiment , doing , a lying estimation of the noise . and , it 's a little bit better but not n phd c: maybe you have to standardize this thing also , noise estimation , because all the thing that you are testing use a different they all need some noise spectra professor b: i have an idea . if if , y you 're right . , each of these require this . , given that we 're going to have for this test at least of , boundaries , what if initially we start off by using known sections of nonspeech for the estimation ? professor b: s so , e , first place , even if ultimately we would n't be given the boundaries , this would be a good initial experiment to separate out the effects of things . , how much is the poor , relatively , unhelpful result that you 're getting in this or this is due to some inherent limitation to the method for these tasks and how much of it is just due to the fact that you 're not accurately finding enough regions that are really n noise ? so maybe if you tested it using that , you 'd have more reliable stretches of nonspeech to do the estimation from and see if that helps . phd e: another thing is the , the codebook , the initial codebook . that maybe , it 's too clean and cuz it 's a i . the methods if you want , you c say something about the method . in the because it 's a little bit different of the other method . , we have if this if this is the noise signal , { nonvocalsound } , in the log domain , we have something like this . now , we have something like this . and the idea of these methods is to n given a , how do you say ? i will read because it 's better for my english . 
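( one cheap reading of the adjacent-frames idea sunil mentions , not carlos 's actual filters , which are not specified here : smooth the noisy power spectrum over a short window before forming an otherwise standard wiener gain : )

```python
import numpy as np

def smoothed_wiener_gains(noisy_psd, noise_psd, context=2, gain_floor=0.01):
    # noisy_psd: (T, F) power spectra; noise_psd: (F,) noise power estimate.
    # averaging over 2*context+1 frames stands in for the smoothing.
    T = noisy_psd.shape[0]
    gains = np.empty_like(noisy_psd, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - context), min(T, t + context + 1)
        smoothed = noisy_psd[lo:hi].mean(axis=0)
        clean_est = np.maximum(smoothed - noise_psd, 0.0)
        gains[t] = np.maximum(clean_est / (clean_est + noise_psd + 1e-10),
                              gain_floor)
    return gains
```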
i i given is the estimate of the pdf of the noise signal when we have a , a statistic of the clean speech and an statistic of the noisy speech . and the clean speech the statistic of the clean speech is from a codebook . mmm ? this is the idea . , like , this relation is not linear . the methods propose to develop this in a vectorial taylor series approximation . professor b: i ' m actually just confused about the equations you have up there . so , the top equation is phd e: no , this in the it 's this is the log domain . i must to say that . professor b: so this it 's the magnitude squared . ok , so you have power spectrum added there and down here you have you put the depends on t , but b all of this is just you just mean professor b: you just mean the log of the one up above . and , so that is x times , phd e: , log { nonvocalsound } e is equal , to log of x plus n . and , phd e: , we can say that e { nonvocalsound } is equal to log of , { nonvocalsound } , exponential of x plus exponential of n . phd e: , this is in the ti the time domain . , we have that , we have first that , x is equal , this is the frequency domain and we can put u that n the log domain log of x omega , but , in the time domain we have an exponential . no ? , maybe it 's i am i ' m problem . professor b: , just never mind what they are . , it 's just if x and n are variables professor b: the the log of x plus n is not the same as the log of e to the x plus e to the n . phd e: do this incorrectly . , the expression that appear in the paper , { nonvocalsound } is , professor b: . cuz it does n't just follow what 's there . it has to be some , taylor series phd d: y . if if you take log x into log one plus n by x , and then expand the log one plus n by x into taylor series phd c: , but the second expression that you put is the first - order expansion of the nonlinear relation between phd e: not exactly . it 's not the first space . , we have pfft , , we can put that x is equal i is equal to log of , mmm , we can put , this ? professor b: that , that the f top one does not imply the second one . because cuz the log of a sum is not the same as th phd e: , . but we can , we know that , the log of e plus b is equal to log of e plus log to b . and we can say here , it i professor b: n no , i do n't see how you get the second expression from the top one . the , just more generally here , if you say " log of , a plus b " , the log of a plus b is not or a plus b is not the , log of e to the a plus e to the b . phd e: i say if i apply log , i have , log of e is equal to log of , in this side , is equal to log of x professor b: so , . it 's just by definition that the individual that the , so , capital x is by definition the same as e to the little x because she 's saying that the little x is the , is the log . alright . phd e: , . that 's true . but this is correct ? and now do it , pfff ! put log { nonvocalsound } of ex plus log professor b: ok . so now once you get that one , then you do a first or second - order , taylor series expansion of this . phd e: this is another linear relation that this to develop this in vector s taylor series . and for that , the goal is to obtain , est estimate a pdf for the noisy speech when we have a statistic for clean speech and for the noisy speech . mmm ? 
and when w the way to obtain the pdf for the noisy speech is , we know this statistic and we know the noisy st , we can apply first order of the vector st taylor series of the of , the order that we want , increase the complexity of the problem . and then when we have a expression , for the mean and variance of the noisy speech , we apply a technique of minimum mean - square estimation to obtain the expected value of the clean speech given the this statistic for the noisy speech the statistic for clean speech and the statistic of the noisy speech . this only that . but the idea is that phd e: u we have our codebook with different density gaussian . we can expre we can put that the pdf for the clean test , probability of the clean speech is equal to professor b: how h how much in the work they reported , how much noisy speech did you need to get , good enough statistics for the to get this mapping ? professor b: cuz what 's certainly characteristic of a lot of the data in this test is that , you do n't have the training set may not be a great estimator for the noise in the test set . sometimes it is and sometimes it 's not . phd e: i the clean speech the codebook for clean speech , i am using timit . and i have now , sixty - four { nonvocalsound } gaus - gaussian . phd e: of the noise i estimate the noises wi , for the noises i only use one gaussian . phd e: the first experiment that i do it is solely to calculate the , mmm , this value , the compensation of the dictionary o one time using the noise at the f beginning of the sentence . this is the first experiment . and i fix this for all the sentences . , because , the vts methods the first thing that i do is to obtain , an expression for e probability e expression of e . that mean that the vts mmm , with the vts we obtain , we obtain the means for each gaussian and the variance . this is one . , this is the composition of the dictionary . this one thing . and the other thing that this with these methods is to , obtain to calculate this value . because we can write , we can write that the estimation of the clean speech is equal at an expected value of the clean speech conditional to , the noise signal the probability f of the statistic of the clean speech and the statistic of the noise . this is the methods that say that we 're going obtain this . and we can put that this is equal to the estimated value of e minus a function that conditional to e to the t to the noise signal . , this is this function is the term after develop this , the term that we take . give px and , p the noise . phd e: and put that this is equal to the noise signal minus , i put before this name , and calculate this . professor b: no , no . i ' m . in in the one you pointed at . what 's that variable ? phd e: but conditional . no , it 's condition it 's not exactly this . it 's modify . , if we have clean speech we have the dictionary for the clean speech , we have a probability f of our weight for each gaussian . and now , this weight is different now phd e: because it 's conditional . and this i need to calcu i know this and i know this because this is from the dictionary that you have . i need to calculate this . and for calculate this , i have an develop an expression that is phd e: that . calculate i calculated this value , with the statistic of the noisy speech that i calculated before with the vts approximation . and , normalizing . and i know everything . 
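( a cleaned-up reconstruction of what appears to be on the board , using the usual vts conventions ; it may not match carmen 's notation exactly . with additive noise in the power domain and lower-case letters for logs : )

```latex
Y = X + N
\;\Longrightarrow\;
y = \log\!\left(e^{x} + e^{n}\right) = x + \log\!\left(1 + e^{\,n-x}\right),
```

and the clean-speech estimate she describes is the mmse combination over the codebook ,

```latex
\hat{x} = E[x \mid y] \approx y - \sum_{k} P(k \mid y)\, g\!\left(\mu_{x,k}, \mu_{n}\right),
\qquad
g(\mu_{x}, \mu_{n}) = \log\!\left(1 + e^{\,\mu_{n} - \mu_{x}}\right).
```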
, with the , nnn when i develop this in s taylor series , i ca n't , calculate the mean and the variance of the for each of the gaussian of the dictionary for the noisy speech . now . and this is fixed . if i never do an estimat a newer estimation of the noise , this mean as mean and the variance are fixed . and for each s , frame of the speech the only thing that i need to do is to calculate this in order to calculate the estimation of the clean speech given our noisy speech . professor b: so , i ' m not following this perfectly i are you saying that all of these estimates are done using , estimates of the probability density for the noise that are calculated only from the first ten frames ? and never change throughout anything else ? phd e: it 's not it 's fixed , the dictionary . and the other estimation is when i do the on - line estimation , i change the means and variance of th for the noisy speech phd e: each time that i detect noise . i do it again this develop . estimate the new mean and the variance of the noisy speech . and with th with this new s new mean and variance i estimate again this . professor b: so you estimated , f completely forgetting what you had before ? , or is there some adaptation ? phd e: no , no . it 's not completely no , it 's i am doing something like an adaptation of the noise . professor b: now do we know , either from their experience or from yours , that , just having , two parameters , the mean and variance , is enough ? , i know you do n't have a lot of data to estimate with , professor b: no , i ' m talking about the noise . there 's only one gaussian . professor b: and you and it 's , right , it 's only one a minute . this is what 's the dimensionality of the gaussian ? professor b: twenty ? so it 's . so it 's actually forty numbers that you 're getting . , maybe you do n't have a professor b: this is this is , the question is , whether it would be helpful , i particularly if you used if you had more so , suppose you did this is almost cheating . it certainly is n't real - time . but if y suppose you use the real boundaries that you were given by the vad and or i we 're gon na be given even better boundaries than that . and you look you take all o all of the nonspeech components in an utterance , so you have a fair amount . do you benefit from having a better model for the noise ? that would be another question . professor b: so first question would be to what extent i are the errors that you 're still seeing based on the fact that you have poor boundaries for the , nonspeech ? and the second question might be , given that you have good boundaries , could you do better if you used more parameters to characterize the noise ? also another question might be , they are doing they 're using first term only of the vector taylor series ? , if you do a second term does it get too complicated cuz of the nonlinearity ? phd e: , it 's the for me it 's the first time that i am working with vts . professor b: no , it 's interesting . , w we have n't had anybody work with it before , so it 's interesting to get your feedback about it . phd e: it 's another type of approximation because i because it 's a statistic approximation to remove the noise . phd f: , i we 're about done . so some of the digit forms do n't have digits . , we ran out there were some blanks in there , so not everybody will be reading digits . but , i you ' ve got some . right , morgan ? phd f: so , why do n't you go ahead and start . and it 's just us down here at this end that have them .
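( pulling the pieces above together , a toy per-frame version in python ; diagonal gaussians are assumed , the compensated variance is a crude stand-in for the proper first-order term , and every name here is illustrative : )

```python
import numpy as np

def vts_mmse_clean_estimate(y, weights, mu_x, var_x, mu_n, var_n):
    # y: (D,) noisy log-mel frame; weights, mu_x, var_x: (K,), (K, D), (K, D)
    # clean-speech codebook; mu_n, var_n: (D,) single-gaussian noise model.
    g = np.log1p(np.exp(mu_n - mu_x))          # correction vector per codeword
    mu_y = mu_x + g                            # compensated means
    var_y = var_x + var_n                      # crude compensated variances
    logp = (np.log(weights)
            - 0.5 * np.sum(np.log(2.0 * np.pi * var_y), axis=1)
            - 0.5 * np.sum((y - mu_y) ** 2 / var_y, axis=1))
    post = np.exp(logp - logp.max())           # posteriors p(k | y)
    post /= post.sum()
    return y - post @ g                        # x_hat = y - E[g | y]
```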
the icsi meeting recorder group at berkeley met once more to discuss group members' progress. the majority of the group are working on tasks related to the aurora project , including on-line normalization and wiener filtering. other progress was also reported. a large part of the meeting was spent discussing calculations and approaches using the white-board in the room. at me013's behest , the group need to look closer at the errors made in tests on the aurora project , because the error rate may not be telling the whole picture. mn052 volunteers to run some experiments into how different numbers of mfccs affect results. some previously reported results from me026 were determined to be garbage due to a bug in the code. speaker mn052 also feels that his strange results are down to a bug. this week , speaker mn007 has mostly been focusing on trying different approaches to on-line normalization , but making little impact on results. he has also been playing with thresholding , effectively adding white noise to the data , but again with minimal effect. also making little improvement is fn002's work with vectorial taylor series , as a means of dealing with noise. mn052 has been adding wiener filtering to the aurora task , and is thinking about future work on the subspace approach. speaker me026 has been investigating phase normalization , and the possibility of adding his mean subtraction approach to an existing system. speaker me006 is still planning some cheating experiments to investigate features for recognition , alongside preparing for his quals. also , the group's submission to the eurospeech conference has been accepted.
###dialogue: professor b: somebody else should run this . i ' m sick of being the one to go through and say , " , what do you think about this ? " you wanna ? phd f: let 's see , maybe we should just get a list of items things that we should talk about . , i there 's the usual updates , everybody going around and saying , , what they 're working on , the things that happened the last week . but aside from that is there anything in particular that anybody wants to bring up phd f: for today ? no ? so why do n't we just around and people can give updates . , do you want to start , stephane ? phd c: alright . , the first thing maybe is that the p eurospeech paper is , accepted . phd c: so it 's the paper that describe the , system that were proposed for the aurora . phd c: so and the , fff comments seems from the reviewer are good . so . mmm phd c: then , whhh , i ' ve been working on t mainly on - line normalization this week . , i ' ve been trying different slightly different approaches . , the first thing is trying to play a little bit again with the , time constant . , second thing is , the training of , on - line normalization with two different means , one mean for the silence and one for the speech . and so i have two recursions which are controlled by the , probability of the voice activity detector . mmm . this actually do n't s does n't seem to help , although it does n't hurt . but , both on - line normalization approach seems equivalent . , they phd c: i did n't look , more closely . it might be , . - . , there is one thing that we can observe , is that the mean are more different for c - zero and c - one than for the other coefficients . and , it the c - one is there are strange thing happening with c - one , is that when you have different noises , the mean for the silence portion is can be different . so when you look at the trajectory of c - one , it 's has a strange shape and i was expecting th the s that these two mean helps , especially because of the strange c - ze c - one shape , which can like , yo you can have , a trajectory for the speech and then when you are in the silence it goes somewhere , but if the noise is different it goes somewhere else . so which would mean that if we estimate the mean based on all the signal , even though we have frame dropping , but we do n't frame ev , drop everything , but , this can hurts the estimation of the mean for speech , mmm . but i still have to investigate further , . , a third thing is , that instead of t having a fixed time constant , i try to have a time constant that 's smaller at the beginning of the utterances to adapt more quickly to the r something that 's closer to the right mean . t t and then this time constant increases and i have a threshold that phd c: , if it 's higher than a certain threshold , i keep it to this threshold to still , adapt , the mean when if the utterance is , long enough to continue to adapt after , like , one second or , this does n't help neither , but this does n't hurt . so , . it seems pretty phd f: was n't there some experiment you were gon na try where you did something differently for each , i whether it was each mel band or each , , fft bin or someth there was something you were gon na , some parameter you were gon na vary depending on the frequency . i if that was phd c: i it was i . no . u maybe it 's this idea of having different on - line normalization , tunings for the different mfcc 's . phd f: , morgan , you brought it up a couple meetings ago . 
and then it was something about , some and then somebody said " , it does seem like , c - zero is the one that 's , the major one " or , s i ca n't remember exactly what it was now . phd c: . there , actually , . s , it 's very important to normalize c - zero and much less to normalize the other coefficients . and , actu , at least with the current on - line normalization scheme . we , we know that normalizing c - one does n't help with the current scheme . and . in my idea , i was thinking that the reason is maybe because of these funny things that happen between speech and silence which have different means . but maybe it 's not so easy to professor b: , i really would like to suggest looking , a little bit at the kinds of errors . i know you can get lost in that and go forever and not see too much , but sometimes , but , just seeing that each of these things did n't make things better may not be enough . it may be that they 're making them better in some ways and worse in others , or increasing insertions and decreasing deletions , or , , helping with noisy case but hurting in quiet case . and if you saw that then maybe you it would something would occur to you of how to deal with that . phd c: w , so that 's it , for the on - line normalization . i ' ve been playing a little bit with some thresholding , and , mmm , as a first experiment , i , what i did is t is to take , to measure the average no , the maximum energy of s each utterance and then put a threshold , this for each mel band . then put a threshold that 's fifteen db below , a couple of db below this maximum , phd c: actually it was not a threshold , it was just adding noise . so i was adding a white noise energy , that 's fifteen db below the maximum energy of the utterance . when we look at the , mfcc that result from this , they are a lot more smoother . when we compare , like , a channel zero and channel one utterance , so a clean and , the same noisy utterance , there is almost no difference between the cepstral coefficients of the two . and the result that we have in term of speech recognition , actually it 's not worse , it 's not better neither , but it 's , surprising that it 's not worse because you add noise that 's fifteen db just fifteen db below the maximum energy . at least phd c: as you add a white noise that are has a very high energy , it whitens everything phd c: and the high - energy portion of the speech do n't get much affected anyway by the other noise . and as the noise you add is the same is the shape , it 's also the same . so they have the trajectory are very , very similar . and and professor b: so , again , if you trained in one noise and tested in the same noise , you 'd , given enough training data you do n't do b do badly . the reason that we d that we have the problems we have is because it 's different in training and test . even if the general kind is the same , the exact instances are different . and and so when you whiten it , then it 's like you the only noise to first order , the only th noise that you have is white noise and you ' ve added the same thing to training and test . so it 's , phd f: so would that be similar to , like , doing the smoothing , then , over time or ? phd c: it 's it 's something that , that affects more or less the silence portions because , anyway , the sp the portion of speech that ha have high energy are not ch a lot affected by the noises in the aurora database . 
if if you compare th the two shut channels of speechdat - car during speech portion , it 's n the mfcc are not very different . they are very different when energy 's lower , like during fricatives or during speech pauses . and , professor b: , but you 're still getting more recognition errors , which means that the differences , even though they look like they 're not so big , are hurting your recognition . phd c: it did n't . so , but in this case i really expect that maybe the two these two stream of features , they are very different . , and maybe we could gain something by combining them professor b: , the other thing is that you just picked one particular way of doing it . , first place it 's fifteen db , down across the utterance . and maybe you 'd want to have something that was a little more adaptive . secondly , you happened to pick fifteen db and maybe twenty 'd be better , or twelve . phd f: so what was the threshold part of it ? was the threshold , how far down ? professor b: , he had to figure out how much to add . so he was looking at the peak value . and then phd f: and and so what 's ho i do n't understand . how does it go ? if it if the peak value 's above some threshold , then you add the noise ? or if it 's below s phd c: i systematically add the noise , but the , noise level is just some threshold below the peak . phd c: which is not really noise , actually . it 's just adding a constant to each of the mel , energy . to each of the mel filter bank . so , it 's really , white noise . i th professor b: so then afterwards a log is taken , and that 's so why the little variation tends to go away . phd c: . so may , the this threshold is still a factor that we have to look at . and i , maybe a constant noise addition would be fine also , professor b: or or not constant but , varying over time is another way to go . were you using the normalization in addition to this ? , what was the rest of the system ? phd c: . it was it was , the same system . it was the same system . , . a third thing is that , i play a little bit with the , finding what was different between , and there were a couple of differences , like the lda filters were not the same . he had the france telecom blind equalization in the system . the number o of mfcc that was were used was different . you used thirteen and we used fifteen . , a bunch of differences . and , actually the result that he got were much better on ti - digits especially . so i ' m investigated to see what was the main factor for this difference . and it seems that the lda filter is was hurting . , so when we put s some noise compensation the , lda filter that 's derived from noisy speech is not more anymore optimal . and it makes a big difference , on ti - digits trained on clean . , if we use the old lda filter , the lda filter that was in the proposal , we have , like , eighty - two point seven percent recognition rate , on noisy speech when the system is trained on clean speech . and when we use the filter that 's derived from clean speech we jumped so from eighty - two point seven to eighty - five point one , which is a huge leap . so now the results are more similar , i do n't i will not , investigate on the other differences , which is like the number of mfcc that we keep and other small things that we can optimize later on anyway . professor b: but on the other hand if everybody is trying different kinds of noise suppression things and , it might be good to standardize on the piece that we 're not changing . 
so if there 's any particular reason to ha pick one or the other , which which one is closer to what the proposal was that was submitted to aurora ? are they both ? phd c: . th , the new system that i tested is , i , closer because it does n't have it have less of france telecom , professor b: , no , i ' m i , you 're trying to add in france telecom . professor b: tell them about the rest of it . like you said the number of filters might be different . or professor b: cep so , we 'd wanna standardize there , would n't we ? so , sh you guys should pick something and , all th all three of you . phd d: , so the right now , the system that is there in the what we have in the repositories , with uses fifteen . phd c: but we will use the lda filters f derived from clean speech . , actually it 's not the lda filter . phd d: so , we have n't w we have been always using , fifteen coefficients , not thirteen ? , that 's something 's then phd d: . ma - maybe we can , at least , i 'll t s run some experiments to see whether once i have this noise compensation to see whether thirteen and fifteen really matters or not . never tested it with the compensation , but without , compensation it was like fifteen was s slightly better than thirteen , so that 's why we stuck to thirteen . phd d: , that 's the other thing . , without noise compensation certainly c - zero is better than log energy . be - , because the there are more , mismatched conditions than the matching conditions for testing . always for the matched condition , you always get a slightly better performance for log energy than c - zero . but not for , for matched and the clean condition both , you get log energy you get a better performance with log energy . , maybe once we have this noise compensation , i , we have to try that also , whether we want to go for c - zero or log energy . we can see that . grad a: , still working on my quals preparation . , so i ' m thinking about , starting some , cheating experiments to , determine the , the relative effectiveness of , some intermediate categories that i want to classify . so , , if i know where voicing occurs and everything , i would do a phone , phone recognition experiment , somehow putting in the , the perfect knowledge that i have about voicing . so , in particular i was thinking , in the hybrid framework , just taking those lna files , and , setting to zero those probabilities that , that these phones are not voicing . so say , like , i know this particular segment is voicing , i would say , go into the corresponding lna file and zonk out the posteriors for , those phonemes that , are not voiced , and then see what kinds of improvements i get . and so this would be a useful thing , to know in terms of , like , which of these categories are good for , speech recognition . so , that 's i hope to get those , those experiments done by the time quals come around in july . phd f: so do you just take the probabilities of the other ones and spread them out evenly among the remaining ones ? grad a: i was thinking ok , so just set to some really low number , the non - voiced , phones . right ? and then renormalize . right . phd f: that will be really interesting to see , so then you 're gon na feed the those into some standard recognizer . phd f: , so then you 'll feed those so where do the outputs of the net go into if you 're doing phone recognition ? grad a: , the outputs of the net go into the standard , h , icsi hybrid , recognizer . 
so maybe , chronos phd f: an - and you 're gon na the you 're gon na do phone recognition with that ? grad a: so . and , another thing would be to extend this to , digits where look at whole words . and i would be able to see , not just , like , phoneme events , but , inter - phoneme events . so , like , this is from a stop to a vo a vocalic segment . so something that is transitional in nature . phd f: let 's see , i have n't done a whole lot on anything related to this week . i ' ve been focusing mainly on meeting recorder . so , i 'll just pass it on to dave . grad g: , ok . , in my lunch talk last week i said i 'd tried phase normalization and gotten garbage results using that l , long - term mean subtraction approach . it turned out there was a bug in my matlab code . so i tried it again , and , the results were better . i got intelligible speech back . but they still were n't as good as just subtracting the magnitude the log magnitude means . and also i ' ve been talking to , andreas and thilo about the , smartkom language model and about coming up with a good model for , far mike use of the smartkom system . so i ' m gon na be working on , implementing this mean subtraction approach in the far - mike system for the smartkom system , . and , one of the experiments we 're gon na do is , we 're gon na , train the a broadcast news net , which is because that 's what we ' ve been using so far , and , adapt it on some other data . , an - andreas wants to use , data that resembles read speech , like these digit readings , because he feels that the smartkom system interaction is not gon na be exactly conversational . s so actually i was wondering , how long does it take to train that broadcast news net ? professor b: so but , , you can get i if you even want to run the big one , , in the final system , cuz , it takes a little while to run it . so , you can scale it down by i ' m , it was two , three weeks for training up for the large broadcast news test set training set . i how much you 'd be training on . the full ? professor b: , i so if you trained on half as much and made the net , half as big , then it would be one fourth the amount of time and it 'd be nearly as good . also , i we had we ' ve had these , little di discussions i you ha have n't had a chance to work with it too much about , m other ways of taking care of the phase . so , i that was something i could say would be that we ' ve talked a little bit about professor b: you just doing it all with complex arithmetic and , and not , doing the polar representation with magnitude and phase . but it looks like there 's ways that one could potentially just work with the complex numbers and in principle get rid of the effects of the average complex spectrum . but grad g: actually , regarding the phase normalization so i did two experiments , and one is so , phases get added , modulo two pi , and because you only know the phase of the complex number t to a value modulo two pi . and so at first , that , what i should do is unwrap the phase because that will undo that . , but i actually got worse results doing that unwrapping using the simple phase unwrapper that 's in matlab than i did not unwrapping . professor b: so i ' m still hopeful that , we do n't even know if the phase is something the average phase is something that we do want to remove . , maybe there 's some deeper reason why it is n't the right thing to do . but , at least in principle it looks like there 's , a couple potential ways to do it . 
one one being to just work with the complex numbers , and , in rectangular coordinates . and the other is to , do a taylor series so you work with the complex numbers and then when you get the spectrum the average complex spectrum , actually divide it out , as opposed to taking the log and subtracting . so then , there might be some numerical issues . we do n't really know that . the other thing we talked a little bit about was taylor series expansion . and , , actually i was talking to dick karp about it a little bit , and , since i got thinking about it , and , so one thing is that y you 'd have to do , we may have to do this on a whiteboard , but you have to be a little careful about scaling the numbers that you 're taking the complex numbers that you 're taking the log of because the taylor expansion for it has , a square and a cube , and . and and so if you have a number that is modulus , , very different from one it should be right around one , if it 's cuz it 's a expansion of log one minus epsilon or o is one plus epsilon , or is it one plus ? , there 's an epsilon squared over two and an epsilon cubed over three , and . so if epsilon is bigger than one , then it diverges . so you have to do some scaling . but that 's not a big deal cuz it 's the log of k times a complex number , then you can just that 's the same as log of k plus log of the complex number . so there 's converges . but . phd d: so , i ' ve been , implementing this , wiener filtering for this aurora task . and , i actually thought it was doing fine when i tested it once . i it 's , like , using a small section of the code . and then i ran the whole recognition experiment with italian and i got , like , worse results than not using it . then i so , i ' ve been trying to find where the problem came from . and then it looks like i have some problem in the way there is some very silly bug somewhere . and , ugh ! , i , it actually i it actually made the whole thing worse . i was looking at the spectrograms that i got and it 's , like w it 's very horrible . like , when i professor b: i missed the v i was distracted . i missed the very first sentence . so then , i ' m a little lost on the rest . what what ? phd d: i actually implemented the wiener f fil filtering as a module and then tested it out separately . phd d: and it gave , like got the signal out and it was ok . so , i plugged it in somewhere and then , it 's like i had to remove some part and then plugging it in somewhere . and then i in that process i messed it up somewhere . so , it was real , it was all fine and then i ran it , and i got something worse than not using it . so , i was like i ' m trying to find where the m problem came , and it seems to be , like , somewhere some silly . and , the other thing , was , hynek showed up one suddenly on one day and then i was t talking wi phd d: so i was actually that day i was thinking about d doing something about the wiener filtering , and then carlos matter of . and then he showed up and then i told him . and then he gave me a whole bunch of filters what carlos used for his , thesis and then that was something which came up . and then , so , i ' m actually , thinking of using that also in this , w wiener filtering because that is a m modified wiener filtering approach , where instead of using the current frame , it uses adjacent frames also in designing the wiener filter . 
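the convergence caveat morgan mentions , written out : the series for the complex log only converges near one , and the scaling that fixes it separates out additively .

```latex
\log(1+\epsilon) \;=\; \epsilon - \frac{\epsilon^{2}}{2} + \frac{\epsilon^{3}}{3} - \cdots ,
\qquad \text{convergent only for } |\epsilon| < 1 ;
\qquad \log(kz) \;=\; \log k + \log z ,
```

so a complex number of large modulus is first scaled by some k to bring it near one , expanded , and the known log k added back . and a minimal sketch of the modified wiener filter being considered , pooling adjacent frames when estimating the gains so the filter is smoothed . this is a generic formulation under stated assumptions , not carlos 's actual filters ; the context size and floor are made - up knobs .

```python
import numpy as np

def smoothed_wiener_gains(noisy_psd, noise_psd, context=2, floor=0.01):
    # noisy_psd: (n_frames, n_bins) power spectra; noise_psd: (n_bins,) estimate
    gains = np.empty_like(noisy_psd)
    for t in range(noisy_psd.shape[0]):
        lo, hi = max(0, t - context), min(noisy_psd.shape[0], t + context + 1)
        local = noisy_psd[lo:hi].mean(axis=0)           # pool adjacent frames
        snr = np.maximum(local / noise_psd - 1.0, 0.0)  # crude a-priori snr
        gains[t] = np.maximum(snr / (1.0 + snr), floor) # classic wiener gain
    return gains  # multiply the noisy spectra bin-by-bin by these gains
```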
so instead of designing our own new wiener filters , i may just use one of those carlos filters in this implementation and see whether it actually gives me something better than using just the current f current frame , which is in a way , something like the smoothing the wiener filter but @ s so , i was h i ' m , like that so that is the next thing . once this i once i sort this pro , problem out maybe i 'll just go into that also . and the other thing was about the subspace approach . so , i , like , plugged some groupings for computing this eigen , s values and eigenvectors . so just @ some small block of things which i needed to put together for the subspace approach . and i ' m in the process of , like , building up that . { nonvocalsound } . i that 's it . and , th that 's where i am right now . phd e: mmm . i ' m working with vts . , i do several experiment with the spanish database first , only with vts and nothing more . not vad , no lda , nothing more . phd f: right , right . i ask you that every single meeting , do n't i ? professor b: it 's good to have some , cases of the same utterance at different times . phd e: vts . i ' m sor , the question is that remove some noise but not too much . and when we put the m the , vad , the result is better . and we put everything , the result is better , but it 's not better than the result that we have without vts . no , no . professor b: i see . so that @ given that you 're using the vad also , the effect of the vts is not so far professor b: do you how much of that do you think is due to just the particular implementation and how much you 're adjusting it ? or how much do you think is intrinsic to ? phd e: , i do the experiment using only the f onl , to use on only one fair estimation of the noise . and also i did some experiment , doing , a lying estimation of the noise . and , it 's a little bit better but not n phd c: maybe you have to standardize this thing also , noise estimation , because all the thing that you are testing use a different they all need some noise spectra professor b: i have an idea . if if , y you 're right . , each of these require this . , given that we 're going to have for this test at least of , boundaries , what if initially we start off by using known sections of nonspeech for the estimation ? professor b: s so , e , first place , even if ultimately we would n't be given the boundaries , this would be a good initial experiment to separate out the effects of things . , how much is the poor , relatively , unhelpful result that you 're getting in this or this is due to some inherent limitation to the method for these tasks and how much of it is just due to the fact that you 're not accurately finding enough regions that are really n noise ? so maybe if you tested it using that , you 'd have more reliable stretches of nonspeech to do the estimation from and see if that helps . phd e: another thing is the , the codebook , the initial codebook . that maybe , it 's too clean and cuz it 's a i . the methods if you want , you c say something about the method . in the because it 's a little bit different of the other method . , we have if this if this is the noise signal , { nonvocalsound } , in the log domain , we have something like this . now , we have something like this . and the idea of these methods is to n given a , how do you say ? i will read because it 's better for my english . 
i i given is the estimate of the pdf of the noise signal when we have a , a statistic of the clean speech and an statistic of the noisy speech . and the clean speech the statistic of the clean speech is from a codebook . mmm ? this is the idea . , like , this relation is not linear . the methods propose to develop this in a vectorial taylor series approximation . professor b: i ' m actually just confused about the equations you have up there . so , the top equation is phd e: no , this in the it 's this is the log domain . i must to say that . professor b: so this it 's the magnitude squared . ok , so you have power spectrum added there and down here you have you put the depends on t , but b all of this is just you just mean professor b: you just mean the log of the one up above . and , so that is x times , phd e: , log { nonvocalsound } e is equal , to log of x plus n . and , phd e: , we can say that e { nonvocalsound } is equal to log of , { nonvocalsound } , exponential of x plus exponential of n . phd e: , this is in the ti the time domain . , we have that , we have first that , x is equal , this is the frequency domain and we can put u that n the log domain log of x omega , but , in the time domain we have an exponential . no ? , maybe it 's i am i ' m problem . professor b: , just never mind what they are . , it 's just if x and n are variables professor b: the the log of x plus n is not the same as the log of e to the x plus e to the n . phd e: do this incorrectly . , the expression that appear in the paper , { nonvocalsound } is , professor b: . cuz it does n't just follow what 's there . it has to be some , taylor series phd d: y . if if you take log x into log one plus n by x , and then expand the log one plus n by x into taylor series phd c: , but the second expression that you put is the first - order expansion of the nonlinear relation between phd e: not exactly . it 's not the first space . , we have pfft , , we can put that x is equal i is equal to log of , mmm , we can put , this ? professor b: that , that the f top one does not imply the second one . because cuz the log of a sum is not the same as th phd e: , . but we can , we know that , the log of e plus b is equal to log of e plus log to b . and we can say here , it i professor b: n no , i do n't see how you get the second expression from the top one . the , just more generally here , if you say " log of , a plus b " , the log of a plus b is not or a plus b is not the , log of e to the a plus e to the b . phd e: i say if i apply log , i have , log of e is equal to log of , in this side , is equal to log of x professor b: so , . it 's just by definition that the individual that the , so , capital x is by definition the same as e to the little x because she 's saying that the little x is the , is the log . alright . phd e: , . that 's true . but this is correct ? and now do it , pfff ! put log { nonvocalsound } of ex plus log professor b: ok . so now once you get that one , then you do a first or second - order , taylor series expansion of this . phd e: this is another linear relation that this to develop this in vector s taylor series . and for that , the goal is to obtain , est estimate a pdf for the noisy speech when we have a statistic for clean speech and for the noisy speech . mmm ? 
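the board equations , written consistently : capital letters are power spectra , lowercase their logs . this is where the disputed step comes from , and it is the nonlinearity that the vector taylor series then linearizes .

```latex
Y = X + N \;\;\text{(power domain)}, \qquad x=\log X,\;\; n=\log N,\;\; y=\log Y
\;\;\Longrightarrow\;\;
y \;=\; \log\!\left(e^{x}+e^{n}\right) \;=\; x + \log\!\left(1+e^{\,n-x}\right) \;\equiv\; g(x,n),
```

and the first - order vts approximation around a codebook mean and the noise mean is

```latex
y \;\approx\; g(\mu_{x},\mu_{n})
 \;+\; \left.\tfrac{\partial g}{\partial x}\right|_{(\mu_{x},\mu_{n})}\!(x-\mu_{x})
 \;+\; \left.\tfrac{\partial g}{\partial n}\right|_{(\mu_{x},\mu_{n})}\!(n-\mu_{n}) .
```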
and when w the way to obtain the pdf for the noisy speech is , we know this statistic and we know the noisy st , we can apply first order of the vector st taylor series of the of , the order that we want , increase the complexity of the problem . and then when we have a expression , for the mean and variance of the noisy speech , we apply a technique of minimum mean - square estimation to obtain the expected value of the clean speech given the this statistic for the noisy speech the statistic for clean speech and the statistic of the noisy speech . this only that . but the idea is that phd e: u we have our codebook with different density gaussian . we can expre we can put that the pdf for the clean test , probability of the clean speech is equal to professor b: how h how much in the work they reported , how much noisy speech did you need to get , good enough statistics for the to get this mapping ? professor b: cuz what 's certainly characteristic of a lot of the data in this test is that , you do n't have the training set may not be a great estimator for the noise in the test set . sometimes it is and sometimes it 's not . phd e: i the clean speech the codebook for clean speech , i am using timit . and i have now , sixty - four { nonvocalsound } gaus - gaussian . phd e: of the noise i estimate the noises wi , for the noises i only use one gaussian . phd e: the first experiment that i do it is solely to calculate the , mmm , this value , the compensation of the dictionary o one time using the noise at the f beginning of the sentence . this is the first experiment . and i fix this for all the sentences . , because , the vts methods the first thing that i do is to obtain , an expression for e probability e expression of e . that mean that the vts mmm , with the vts we obtain , we obtain the means for each gaussian and the variance . this is one . , this is the composition of the dictionary . this one thing . and the other thing that this with these methods is to , obtain to calculate this value . because we can write , we can write that the estimation of the clean speech is equal at an expected value of the clean speech conditional to , the noise signal the probability f of the statistic of the clean speech and the statistic of the noise . this is the methods that say that we 're going obtain this . and we can put that this is equal to the estimated value of e minus a function that conditional to e to the t to the noise signal . , this is this function is the term after develop this , the term that we take . give px and , p the noise . phd e: and put that this is equal to the noise signal minus , i put before this name , and calculate this . professor b: no , no . i ' m . in in the one you pointed at . what 's that variable ? phd e: but conditional . no , it 's condition it 's not exactly this . it 's modify . , if we have clean speech we have the dictionary for the clean speech , we have a probability f of our weight for each gaussian . and now , this weight is different now phd e: because it 's conditional . and this i need to calcu i know this and i know this because this is from the dictionary that you have . i need to calculate this . and for calculate this , i have an develop an expression that is phd e: that . calculate i calculated this value , with the statistic of the noisy speech that i calculated before with the vts approximation . and , normalizing . and i know everything . 
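a minimal sketch of the mmse step as described , in the spirit of the published vts method : compensate each clean - speech codebook gaussian for the current noise estimate , weight by its posterior given the noisy frame , and subtract the weighted correction . diagonal covariances and all names here are illustrative , and the variance compensation is omitted for brevity .

```python
import numpy as np
from scipy.stats import multivariate_normal

def g(mu_x, mu_n):
    # additive-noise correction term in the log (mel) domain
    return np.log1p(np.exp(mu_n - mu_x))

def vts_mmse_clean(y, means, variances, priors, mu_n):
    # y: (dim,) noisy frame; means/variances: (K, dim) clean codebook; priors: (K,)
    corrections = np.array([g(mu, mu_n) for mu in means])
    noisy_means = means + corrections  # first-order compensated means
    lik = np.array([multivariate_normal.pdf(y, m, np.diag(v))
                    for m, v in zip(noisy_means, variances)])
    post = priors * lik
    post /= post.sum()                 # posterior weight of each gaussian given y
    return y - post @ corrections      # mmse estimate of the clean log spectrum
```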
, with the , nnn when i develop this in s taylor series , i ca n't , calculate the mean and the variance of the for each of the gaussian of the dictionary for the noisy speech . now . and this is fixed . if i never do an estimat a newer estimation of the noise , this mean as mean and the variance are fixed . and for each s , frame of the speech the only thing that i need to do is to calculate this in order to calculate the estimation of the clean speech given our noisy speech . professor b: so , i ' m not following this perfectly i are you saying that all of these estimates are done using , estimates of the probability density for the noise that are calculated only from the first ten frames ? and never change throughout anything else ? phd e: it 's not it 's fixed , the dictionary . and the other estimation is when i do the on - line estimation , i change the means and variance of th for the noisy speech phd e: each time that i detect noise . i do it again this develop . estimate the new mean and the variance of the noisy speech . and with th with this new s new mean and variance i estimate again this . professor b: so you estimated , f completely forgetting what you had before ? , or is there some adaptation ? phd e: no , no . it 's not completely no , it 's i am doing something like an adaptation of the noise . professor b: now do we know , either from their experience or from yours , that , just having , two parameters , the mean and variance , is enough ? , i know you do n't have a lot of data to estimate with , professor b: no , i ' m talking about the noise . there 's only one gaussian . professor b: and you and it 's , right , it 's only one a minute . this is what 's the dimensionality of the gaussian ? professor b: twenty ? so it 's . so it 's actually forty numbers that you 're getting . , maybe you do n't have a professor b: this is this is , the question is , whether it would be helpful , i particularly if you used if you had more so , suppose you did this is almost cheating . it certainly is n't real - time . but if y suppose you use the real boundaries that you were given by the vad and or i we 're gon na be given even better boundaries than that . and you look you take all o all of the nonspeech components in an utterance , so you have a fair amount . do you benefit from having a better model for the noise ? that would be another question . professor b: so first question would be to what extent i are the errors that you 're still seeing based on the fact that you have poor boundaries for the , nonspeech ? and the second question might be , given that you have good boundaries , could you do better if you used more parameters to characterize the noise ? also another question might be , they are doing they 're using first term only of the vector taylor series ? , if you do a second term does it get too complicated cuz of the nonlinearity ? phd e: , it 's the for me it 's the first time that i am working with vts . professor b: no , it 's interesting . , w we have n't had anybody work with it before , so it 's interesting to get your feedback about it . phd e: it 's another type of approximation because i because it 's a statistic approximation to remove the noise . phd f: , i we 're about done . so some of the digit forms do n't have digits . , we ran out there were some blanks in there , so not everybody will be reading digits . but , i you ' ve got some . right , morgan ? phd f: so , why do n't you go ahead and start . and it 's just us down here at this end that have them . 
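going back to the on - line noise estimation phd e describes above , a minimal sketch of the adaptation : whenever the vad ( or the given boundaries ) marks a frame as nonspeech , blend it into a running mean and variance instead of re - estimating from scratch . the forgetting factor is a made - up knob .

```python
import numpy as np

class NoiseModel:
    # one diagonal gaussian over log (mel) energies, updated on nonspeech frames
    def __init__(self, dim, alpha=0.95):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.alpha = alpha                 # closer to 1.0 = slower adaptation

    def update(self, frame):
        a = self.alpha
        self.mean = a * self.mean + (1 - a) * frame
        self.var = a * self.var + (1 - a) * (frame - self.mean) ** 2

# with oracle boundaries: for t in nonspeech_frames: noise.update(logmel[t])
```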
###summary: the icsi meeting recorder group at berkeley met once more to discuss group members' progress. the majority of the group are working on tasks related to the aurora project , including on-line normalization and wiener filtering. other progress was also reported. a large part of the meeting was spent discussing calculations and approaches using the white-board in the room. at me013's behest , the group needs to look closer at the errors made in tests on the aurora project , because the error rate may not be telling the whole picture. mn052 volunteers to run some experiments into how different numbers of mfccs affect results. some previously reported results from me026 were determined to be garbage due to a bug in the code ; speaker mn052 also feels that his strange results are down to a bug. this week , speaker mn007 has mostly been focusing on trying different approaches to on-line normalization , but making little impact on results. he has also been playing with thresholding , effectively adding white noise to the data , but again with minimal effect. also making little improvement is fn002's work with vectorial taylor series , as a means of dealing with noise. mn052 has been adding wiener filtering to the aurora task , and is thinking about future work on subspace. speaker me026 has been investigating phase normalization , and the possibility of adding spectral subtraction to an existing system. speaker me006 is still planning some cheating experiments to investigate features for recognition , alongside preparing for his quals. also , the group's submission to the eurospeech conference has been accepted.
professor b: it 's april fifth . actually , hynek should be getting back in town shortly if he is n't already . professor b: u u , i meant , this end of the world , is really what i meant , phd c: , i did some experim , just a few more experiments before i had to , go away for the w , that week . phd c: was it last week or whenever ? , so what i was started playing with was the th again , this is the htk back - end . and , i was curious because the way that they train up the models , they go through about four rounds of training . and in the first round they do , it 's three iterations , and for the last three rounds e they do seven iterations of re - estimation in each of those three . and so , that 's part of what takes so long to train the back - end for this . professor b: i ' m , i did n't quite get that . there 's there 's four and there 's seven and i ' m . phd c: maybe i should write it on the board . so , there 's four rounds of training . i g i you could say iterations . the first one is three , then seven , and seven . and what these numbers refer to is the number of times that the , hmm re - estimation is run . it 's this program called h e professor b: but in htk , what 's the difference between , a an inner loop and an outer loop in these iterations ? phd c: ok . so what happens is , at each one of these points , you increase the number of gaussians in the model . phd c: and so , in the final one here , you end up with , for all of the digit words , you end up with , three mixtures per state , in the final thing . so i had done some experiments where i was i want to play with the number of mixtures . phd c: but , , i wanted to first test to see if we actually need to do this many iterations early on . phd c: , i ran a couple of experiments where i reduced that to l to be three , two , five , and i got almost the exact same results . and but it runs much faster . so , m it only took something like , three or four hours to do the full training , phd c: as opposed to wh what , sixteen hours like that ? , it takes you have to do an overnight , the way it is set up now . phd c: so , even we do n't do anything else , doing something like this could allow us to turn experiments around a lot faster . professor b: and then when you have your final thing , do a full one , so it 's phd c: and when you have your final thing , we go back to this . and it 's a real simple change to make . , it 's like one little text file you edit and change those numbers , and you do n't do anything else . phd c: so it 's a very simple change to make and it does n't seem to hurt all that much . phd c: so i , i have to look to see what the exact numbers were . was , like , three , two , five , but i 'll double check . it was over a week ago that i did it , phd c: so i ca n't remember exactly . but , but it 's so much faster . i it makes a big difference . so we could do a lot more experiments and throw a lot more in there . phd c: , the other thing that i did was , i compiled the htk for the linux boxes . so we have this big thing that we got from ibm , which is a five - processor machine . really fast , but it 's running linux . so , you can now run your experiments on that machine and you can run five at a time and it runs , as fast as , , five different machines . i ' ve forgotten now what the name of that machine is but i can send email around about it . and so we ' ve got it now htk 's compiled for both the linux and for , the sparcs . 
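rough arithmetic on the schedule change described above , under the crude assumption that every re - estimation pass costs about the same ( in reality later rounds , with more mixtures , cost more , and the observed speedup was sixteen hours down to three or four ) :

```python
full = [3, 7, 7, 7]   # herest iterations per training round, as described
reduced = [3, 2, 5]   # the shortened schedule (numbers quoted from memory)
print(sum(full), "vs", sum(reduced), "passes ->",
      round(sum(full) / sum(reduced), 1), "x fewer re-estimations")
```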
, you have to make that in your dot cshrc , it detects whether you 're running on the linux or a sparc and points to the right executables . and you may not have had that in your dot cshrc before , if you were always just running the sparc . , i can tell you exactly what you need to do to get all of that to work . but it 'll it really increases what we can run on . phd c: so , together with the fact that we ' ve got these faster linux boxes and that it takes less time to do these , we should be able to crank through a lot more experiments . so after i did that , then what i wanted to do was try increasing the number of mixtures , just to see , see how that affects performance . professor b: . , you could do something like keep exactly the same procedure and then add a fifth thing onto it grad e: so at the middle o where the arrows are showing , that 's you 're adding one more mixture per state , or ? phd c: let 's see , it goes from this , try to go it backwards this at this point it 's two mixtures per state . so this just adds one . except that , actually for the silence model , it 's six mixtures per state . , so it goes to two . phd c: . it 's , shoot . i ca n't remember now what happens at that first one . , i have to look it up and see . phd c: there because they start off with , an initial model which is just this global model , and then they split it to the individuals . and so , it may be that 's what 's happening here . i have to look it up and see . i do n't exactly remember . so . that 's it . phd a: there was a conference call this tuesday . i yet the what happened tuesday , but the points that they were supposed to discuss is still , things like the weights , professor b: do who was since we were n't in on it , do who was in from ogi ? was was hynek involved or was it sunil phd a: , so the points were the weights how to weight the different error rates that are obtained from different language and conditions . it 's not clear that they will keep the same weighting . right now it 's a weighting on improvement . some people are arguing that it would be better to have weights on , to combine error rates before computing improvement . , and the fact is that for right now for the english , they have weights they combine error rates , but for the other languages they combine improvement . so it 's not very consistent . the , and so , this is a point . and right now actually there is a thing also , that happens with the current weight is that a very non - significant improvement on the - matched case result in huge differences in the final number . and so , perhaps they will change the weights to phd c: how should that be done ? , it seems like there 's a simple way , this seems like an obvious mistake . professor b: i have n't thought it through , but one would think that each it it 's like if you say what 's the best way to do an average , an arithmetic average or a geometric average ? it depends what you wanna show . each each one is gon na have a different characteristic . so phd c: , it seems like they should do , like , the percentage improvement , rather than the absolute improvement . professor b: , they are doing that . no , that is relative . but the question is , do you average the relative improvements or do you average the error rates and take the relative improvement maybe of that ? and it 's not just a pure average because there are these weightings . it 's a weighted average . 
phd a: and so when you average the relative improvement it tends to give a lot of , importance to the - matched case because the baseline is already very good and , i it 's phd c: why do n't they not look at improvements but just look at your av your scores ? , figure out how to combine the scores with a weight or whatever , and then give you a score here 's your score . and then they can do the same thing for the baseline system and here 's its score . and then you can look at professor b: , that 's what he 's seeing as one of the things they could do . it 's just when you get all done , that they pro i m i was n't there but they started off this process with the notion that you should be significantly better than the previous standard . and , so they said " how much is significantly better ? what do you ? " and and so they said " , you should have half the errors , " that you had before " . so it 's , but it does seem like i it does seem like it 's more logical to combine them first and then do the phd a: combine error rates and then but there is this still this problem of weights . when when you combine error rate it tends to give more importance to the difficult cases , and some people think that phd a: , they have different , opinions about this . some people think that it 's more important to look at to have ten percent imp relative improvement on - matched case than to have fifty percent on the m mismatched , and other people think that it 's more important to improve a lot on the mismatch and so , bu phd c: it sounds like they do n't really have a good idea about what the final application is gon na be . professor b: , that if you look at the numbers on the more difficult cases , if you really believe that was gon na be the predominant use , none of this would be good enough . nothing anybody 's whereas you with some reasonable error recovery could imagine in the better cases that these systems working . so , the hope would be that it would , it would work for the good cases and , it would have reasonable reas soft degradation as you got to worse and worse conditions . phd c: i what i ' m , i was thinking about it in terms of , if i were building the final product and i was gon na test to see which front - end i 'd i wanted to use , i would try to weight things depending on the exact environment that i was gon na be using the system in . professor b: , no . , it is n't the operating theater . , they don they do n't really know , . , i th phd c: so if they , does n't that suggest the way for them to go ? you assume everything 's equal . , y , you professor b: , one thing to do is to just not rely on a single number to maybe have two or three numbers , and say here 's how much you , you improve the , the relatively clean case and here 's or - matched case , and here 's how much you , professor b: , actually it 's true . , i had forgotten this , but , - matched is not actually clean . what it is just that , u , the training and testing are similar . professor b: i what you would do in practice is you 'd try to get as many , examples of similar as you could , and then , so the argument for that being the more important thing , is that you 're gon na try and do that , but you wanna see how badly it deviates from that when the , it 's a little different . professor b: that 's an ar , that 's an argument for it , but let me give you the opposite argument . the opposite argument is you 're never really gon na have a good sample of all these different things . 
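a toy illustration of the two combination schemes with made - up error rates and weights : averaging per - condition relative improvements lets a small absolute gain on the well - matched case dominate , while averaging the error rates first does not .

```python
# hypothetical (baseline, proposed) word error rates for wm / mm / hm
baseline = [6.0, 18.0, 60.0]
proposed = [3.0, 15.0, 55.0]
weights  = [0.40, 0.35, 0.25]  # hypothetical condition weights

# scheme 1: relative improvement per condition, then weighted average
rel = [(b - p) / b for b, p in zip(baseline, proposed)]
s1 = sum(w * r for w, r in zip(weights, rel))

# scheme 2: weighted-average error rates first, then one relative improvement
b_avg = sum(w * b for w, b in zip(weights, baseline))
p_avg = sum(w * p for w, p in zip(weights, proposed))
s2 = (b_avg - p_avg) / b_avg

print(round(s1, 3), "vs", round(s2, 3))  # 0.279 vs 0.148 on these numbers
```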
, are you gon na have w , examples with the windows open , half open , full open ? going seventy , sixty , fifty , forty miles an hour ? on what roads ? with what passing you ? with , i that you could make the opposite argument that the - matched case is a fantasy . so , professor b: that if you look at the - matched case versus the po , the medium and the fo and then the mismatched case , we 're seeing really , really big differences in performance . right ? and and y you would n't like that to be the case . you would n't like that as soon as you step outside , a lot of the cases it 's is professor b: , in these cases , if you go from the , i do n't remember the numbers right off , but if you go from the - matched case to the medium , it 's not an enormous difference in the training - testing situation , and it 's a really big performance drop . , the reference one , this is back old on , on italian , was like six percent error for the - matched and eighteen for the medium - matched and sixty for the for highly - mismatched . and , with these other systems we helped it out quite a bit , but still there 's something like a factor of two between - matched and medium - matched . and so that if what you 're if the goal of this is to come up with robust features , it does mean so you could argue , that the - matched is something you should n't be looking , that the goal is to come up with features that will still give you reasonable performance , with again gentle degregra degradation , even though the testing condition is not the same as the training . so , i could argue strongly that something like the medium mismatch , which is not compl pathological but , what was the medium - mismatch condition again ? phd a: , it 's medium mismatch is everything with the far microphone , but trained on , like , low noisy condition , like low speed and or stopped car and tested on high - speed conditions , like on a highway professor b: but , it 's there 's a mismatch between the car conditions . and that 's , you could argue that 's a pretty realistic situation and , i 'd almost argue for weighting that highest . but the way they have it now , it 's i it 's they they compute the relative improvement first and then average that with a weighting ? and so then the that makes the highly - matched the really big thing . so , u i since they have these three categories , it seems like the reasonable thing to do is to go across the languages and to come up with an improvement for each of those . just say " ok , in the highly - matched case this is what happens , in the m the , this other m medium if this happens , in the highly - mismatched that happens " . you should see , a gentle degradation through that . i . that i gather that in these meetings it 's really tricky to make anything ac make any policy change because everybody has , their own opinion phd a: , so but there is probably a big change that will be made is that the baseline th they want to have a new baseline , perhaps , which is , mfcc but with a voice activity detector . and , some people are pushing to still keep this fifty percent number . so they want to have at least fifty percent improvement on the baseline , but w which would be a much better baseline . and if we look at the result that sunil sent , just putting the vad in the baseline improved , like , more than twenty percent , which would mean then mean that fifty percent on this new baseline is like , more than sixty percent improvement on o e phd a: , they did n't decide yet . 
this was one point of the conference call also , so i . professor b: that would be good . it 's not that the design of the vad is n't important , but it does seem to be a lot of work to do a good job on that as well as a lot of work to do a good job on the feature design , so if we can cut down on that maybe we can make some progress . phd a: but perhaps , someone said that perhaps it 's not fair to do that , because to make a good vad you do n't have enough with the features that are the baseline features . you need more features . so you really need to put more in the front - end . professor b: so , let 's say for instance , mfcc does n't have anything in it related to the pitch . so suppose what you really wanna do is put a good pitch detector on there professor b: if it gets an unambiguous result then you 're definitely in a region with speech . phd c: so there 's this assumption that the voice activity detector can only use the mfcc ? professor b: for the baseline . so if you use other features then but it 's just a question of what is your baseline . what is it that you 're supposed to do better than ? professor b: and so having the baseline be the mfcc 's means that people could choose to pour their effort into trying to do a really good vad professor b: unfortunately there 's coupling between them , which is part of what stephane is getting to , is that you can choose your features in such a way as to improve the vad . and you also can choose your features in such a way as to improve recognition . they may not be the same thing . professor b: you should do both , and i still think this makes sense as a baseline . it 's just saying , as a baseline , we had the mfcc 's before , lots of people have done voice activity detectors , you might as well pick some voice activity detector and make that the baseline , just like you picked some version of htk and made that the baseline . professor b: and then let 's try and make everything better . and if one of the ways you make it better is by having your features be better features for the vad , then so be it . but at least you have a starting point , cuz some of the people did n't have a vad . and then they looked pretty bad , and what they were doing was n't so bad . phd c: it seems like you should try to make your baseline as good as possible . and if it turns out that you ca n't improve on that , then nobody wins and you just use mfcc . professor b: it seems like it should include the current state of the art that you are trying to improve on , and mfcc 's or plp seem like a reasonable baseline for the features , and anybody doing this task is gon na have some voice activity detection at some level , in some way . they might use the whole recognizer to do it rather than a separate thing , but they 'll have it on some level . phd c: it seems like whatever they choose they should n't purposefully brain - damage a part of the system to make a worse baseline . professor b: it was n't that they purposely brain - damaged it . people had n't really thought through the vad issue . professor b: and then when the proposals actually came in , half of them had v a ds and half of them did n't , and the half that did n't did poorly .
so it 's phd a: we 'll see what happen with this . and so what happened since , last week is , from ogi , these experiments on putting vad on the baseline . and these experiments also are using , some noise compensation , so spectral subtraction , and putting on - line normalization , just after this . so spectral subtraction , lda filtering , and on - line normalization , so which is similar to the pro proposal - one , but with spectral subtraction in addition , and it seems that on - line normalization does n't help further when you have spectral subtraction . phd c: is this related to the issue that you brought up a couple of meetings ago with the musical tones and ? phd a: i have no idea , because the issue i brought up was with a very simple spectral subtraction approach , and the one that they use at ogi is one from the proposed the aurora prop , proposals , which might be much better . so , . i asked sunil for more information about that , but , i yet . and what 's happened here is that we so we have this new , reference system which use a clean downsampling - upsampling , which use a new filter that 's much shorter and which also cuts the frequency below sixty - four hertz , which was not done on our first proposal . professor b: when you say " we have that " , does sunil have it now , too , phd a: no . because we 're still testing . so we have the result for , just the features and we are currently testing with putting the neural network in the klt . , it seems to improve on the - matched case , but it 's a little bit worse on the mismatch and highly - mismatched when we put the neural network . and with the current weighting it 's sh it will be better because the - matched case is better . professor b: but how much worse since the weighting might change how much worse is it on the other conditions , when you say it 's a little worse ? phd a: - y w when i say it 's worse , it 's not it 's when i , compare proposal - two to proposal - one , so , r , y putting neural network compared to n not having any neural network . , this new system is better , because it has , this sixty - four hertz cut - off , clean downsampling , what else ? , a good vad . we put the good vad . so . , i . i j , pr phd a: mainly because of the sixty - four hertz and the good vad . and then i took this system and , mmm , w , i p we put the old filters also . so we have this good system , with good vad , with the short filter and with the long filter , with the short filter it 's not worse . , is it 's in professor b: but what you 're saying is that when you do these so let me try to understand . when when you do these same improvements to proposal - one , that , on the i things are somewhat better , in proposal - two for the - matched case and somewhat worse for the other two cases . so does , when you say , the th now that these other things are in there , is it the case maybe that the additions of proposal - two over proposal - one are less i m important ? phd a: , but it 's a good thing anyway to have shorter delay . then we tried , to do something like proposal - two but having , e using also msg features . so there is this klt part , which use just the standard features , and then two neura two neural networks . and it does n't seem to help . , however , we just have one result , which is the italian mismatch , so . we have to for that to fill the whole table , professor b: there was a start of some effort on something related to voicing . is that ? 
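for reference , the textbook form of spectral subtraction : this is the simple variant , the kind that produces musical tones when the floor is too low , not necessarily what the ogi or aurora - proposal code does . alpha and beta are the usual over - subtraction and flooring knobs .

```python
import numpy as np

def spectral_subtract(noisy_psd, noise_psd, alpha=1.0, beta=0.05):
    # noisy_psd: (n_frames, n_bins); noise_psd: (n_bins,) from nonspeech frames
    clean = noisy_psd - alpha * noise_psd       # subtract the noise estimate
    return np.maximum(clean, beta * noisy_psd)  # floor to limit musical tones
```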
phd a: so we try to , find good features that could be used for voicing detection , but it 's still , on the , t phd a: so we would be looking at , the variance of the spectrum of the excitation , professor b: , w what yo what you 're calling the excitation , as i recall , is you 're subtracting the , the mel filter , spectrum from the fft spectrum . phd a: e that 's right . so we have the mel f filter bank , we have the fft , so we just professor b: so it 's not really an excitation , but it 's something that hopefully tells you something about the excitation . phd a: but it 's still so , for unvoiced portion we have something tha that has a mean around o point three , and for voiced portion the mean is o point fifty - nine . but the variance seem quite high . phd a: it seems quite robust to noise , so when we take we draw its parameters across time for a clean sentence and then nois the same noisy sentence , it 's very close . so there are there is this . there could be also the , something like the maximum of the auto - correlation function or which phd c: is this a s a trained system ? or is it a system where you just pick some thresholds ? ho - how does it work ? phd a: right now we just are trying to find some features . and , . hopefully , what we want to have is to put these features in s some , to obtain a statistical model on these features and to or just to use a neural network and hopefully these features w would help phd c: because it seems like what you said about the mean of the voiced and the unvoiced that seemed pretty encouraging . phd c: , y i that i would trust that so much because you 're doing these canonical mappings from timit labellings . really that 's a cartoon picture about what 's voiced and unvoiced . so that could be giving you a lot of variance . i it may be that you 're finding something good and that the variance is artificial because of how you 're getting your truth . professor b: but another way of looking at it might be that , what w we are coming up with feature sets after all . so another way of looking at it is that , the mel cepstru mel spectrum , mel cepstrum , any of these variants , give you the smooth spectrum . it 's the spectral envelope . by going back to the fft , you 're getting something that is more like the raw data . so the question is , what characterization and you 're playing around with this another way of looking at it is what characterization of the difference between the raw data and this smooth version is something that you 're missing that could help ? so , looking at different statistical measures of that difference , coming up with some things and just trying them out and seeing if you add them onto the feature vector does that make things better or worse in noise , where you 're really just i the way i ' m looking at it is not so much you 're trying to f find the best the world 's best voiced - unvoiced , classifier , but it 's more that , , try some different statistical characterizations of that difference back to the raw data m maybe there 's something there that the system can use . phd a: , but ther more obvious is that the the more obvious is that , using the th the fft , you just it gives you just information about if it 's voiced or not voiced , ma mainly , . but so , this is why we started to look by having voiced phonemes professor b: , that 's the rea w what i ' m arguing is that 's , what i ' m arguing is that 's givi you gives you your intuition . 
but in reality , it 's , there 's all of this overlap and , professor b: and but what i ' m saying is that may be ok , because what you 're really getting is not actually voiced versus unvoiced , both for the fac the reason of the overlap and then , th , structural reasons , like the one that chuck said , that , the data itself is that you 're working with is not perfect . so , what i ' m saying is maybe that 's not a killer because you 're just getting some characterization , one that 's driven by your intuition about voiced - unvoiced certainly , but it 's just some characterization of something back in the almost raw data , rather than the smooth version . and your intuition is driving you towards particular kinds of , statistical characterizations of , what 's missing from the spectral envelope . , you have something about the excitation , and what is it about the excitation , and you 're not getting the excitation anyway , . so i would almost take a , especially if these trainings and are faster , i would almost just take a , a scattershot at a few different ways of look of characterizing that difference and , you could have one of them but and see , which of them helps . phd c: so i is the idea that you 're going to take whatever features you develop and just add them onto the future vector ? or , what 's the use of the voiced - unvoiced detector ? phd a: , no . no , the idea was , i , to use them as features . phd a: , it could be a neural network that does voiced and unvoiced detection , but it could be in the also the big neural network that does phoneme classification . professor b: but each one of the mixture components , you have , variance only , so it 's like you 're just multiplying together these , probabilities from the individual features within each mixture . it seems l professor b: , i know that , people doing some robustness things a ways back were just doing just being gross and just throwing in the fft and actually it was n't so bad . , so it would s and that i it 's got ta hurt you a little bit to not have a spectral , a s a smooth spectral envelope , so there must be something else that you get in return for that , phd c: so how does , maybe i ' m going in too much detail , but how exactly do you make the difference between the fft and the smoothed spectral envelope ? wha - wh i , how is that , ? phd f: , we distend the we have the twenty - three coefficient af after the mel f filter , and we extend these coefficient between the all the frequency range . and i the interpolation i between give for the triang triangular filter , the value of the triangular filter and of this way we obtained this mode this model speech . professor b: so you essentially take the values that th that you get from the triangular filter and extend them to sor like a rectangle , that 's at that m value . phd a: . we have linear interpolation . so we have one point for one energy for each filter bank , phd c: so you end up with a vector that 's the same length as the fft vector ? and then you just , compute differences phd a: and the variance is computed only from , like , two hundred hertz to one to fifteen hundred . phd a: above , it seems that , some voiced sound can have also , like , a noisy part on high frequencies , but , it 's just phd c: so this is , this is comparing an original version of the signal to a smoothed version of the same signal ? 
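a minimal sketch of the feature as described : interpolate the twenty - three mel filter - bank energies back onto the fft bins , take the log - domain difference , and compute its variance over roughly two hundred to fifteen hundred hertz . the bin bookkeeping is schematic .

```python
import numpy as np

def excitation_variance(fft_power, fbank_energies, center_bins, lo_bin, hi_bin):
    # fft_power: (n_bins,) one frame; fbank_energies: (23,) mel filter outputs
    # center_bins: (23,) fft bin index of each triangular filter's center
    bins = np.arange(len(fft_power))
    envelope = np.interp(bins, center_bins, fbank_energies)  # linear interpolation
    resid = np.log(fft_power + 1e-12) - np.log(envelope + 1e-12)
    # high variance = harmonic ripple (voiced); low = flat residual (unvoiced)
    return resid[lo_bin:hi_bin].var()
```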
professor b: so i so i this is , i you could argue about whether it should be linear interpolation or zeroeth order , at any rate something like this is what you 're feeding your recognizer , typically . professor b: , so the mel cepstrum is the cepstrum of this , spectrum or log spectrum , phd c: so what 's th , what 's the intuition behind this a thing ? i really know the signal - processing enough to understand what is that doing . phd a: what happen if what we have what we would like to have is some spectrum of the excitation signal , phd a: which is for voiced sound ideally a pulse train and for unvoiced it 's something that 's more flat . and the way to do this is that , we have the fft because it 's computed in the system , and we have the mel filter banks , and so if we , like , remove the mel filter bank from the fft , we have something that 's close to the excitation signal . it 's something that 's like a train of p a pulse train for voiced sound and that 's that should be flat for phd c: is this for a voiced segment , this picture ? what does it look like for unvoiced ? phd f: but between the frequency that we are considered for the excitation for the difference and this is the difference . professor b: that 's like fundamental frequency . so , i t , to first order what you 'd what you 're doing , ignore all the details and all the ways which is that these are complete lies . , the , what you 're doing in feature extraction for speech recognition is you have , in your head a simplified production model for speech , in which you have a periodic or aperiodic source that 's driving some filters . phd a: do you have the mean do you have the mean for the auto - correlation ? professor b: , first order for speech recognition , you say " i do n't care about the source " . professor b: and so you just want to find out what the filters are . the filters roughly act like a , a an overall resonant , f some resonances and that th that 's processing excitation . phd f: , no . this is this ? more close . is this ? and this . professor b: so if you look at the spectral envelope , just the very smooth properties of it , you get something closer to that . professor b: and the notion is if you have the full spectrum , with all the little nitty - gritty details , that has the effect of both , and it would be a multiplication in frequency domain so that would be like an addition in log power spectrum domain . and so this is saying , if you really do have that vocal tract envelope , and you subtract that off , what you get is the excitation . and i call that lies because you do n't really have that , you just have some signal - processing trickery to get something that 's smooth . it 's not really what 's happening in the vocal tract so you 're not really getting the vocal excitation . that 's why i was going to the why i was referring to it in a more , conservative way , when i was saying " , it 's the excitation " . but it 's not really the excitation . it 's whatever it is that 's different between professor b: so so , stand standing back from that , you say there 's this very detailed representation . you go to a smooth representation . you go to a smooth representation cuz this typically generalizes better . but whenever you smooth you lose something , so the question is have you lost something you can you use ? 
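the source - filter picture behind all of this , in one line : in the log power domain the smooth envelope and the excitation separate additively , so subtracting the envelope leaves ( an approximation to ) the excitation .

```latex
|Y(\omega)|^{2} \;=\; |H(\omega)|^{2}\,|E(\omega)|^{2}
\;\;\Longrightarrow\;\;
\log|Y(\omega)|^{2} \;-\; \underbrace{\log|H(\omega)|^{2}}_{\text{smooth envelope}}
\;=\; \log|E(\omega)|^{2} .
```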
, probably you would n't want to go to the extreme of just ta saying " ok , our feature set will be the fft " , cuz we really think we do gain something in robustness from going to something smoother , but maybe there 's something that we missed . so what is it ? and then you go back to the intuition that , you do n't really get the excitation , but you get something related to it . and it and as you can see from those pictures , you do get something that shows some periodicity , in frequency , and also in time . phd c: that 's that 's really neat . so you do n't have one for unvoiced picture ? professor b: but presumably you 'll see something that wo n't have this , , regularity in frequency , in the phd c: and so you said this is pretty doing this thing is pretty robust to noise ? phd a: so if you take this frame , from the noisy utterance and the same frame from the clean utterance phd f: because here the fft is only with two hundred fifty - six point and this is with five hundred twelve . phd a: this is inter interesting also because if we use the standard , frame length of , like , twenty - five milliseconds , what happens is that for low - pitched voiced , because of the frame length , y you do n't really have you do n't clearly see this periodic structure , because of the first lobe of each of the harmonics . phd c: , it 's that time - frequency trade - off thing . , so this i is this the difference here , for that ? phd a: so with a short frame you have only two periods and it 's not enough to have this neat things . professor b: , maybe . , it looks better , but , if you 're actually asking , if you actually j , need to do place along an fft , it may be pushing things . and and , phd c: would you would you wanna do this , difference thing after you do spectral subtraction ? professor b: the spectral subtraction is being done at what level ? is it being done at the level of fft bins or at the level of , mel spectrum ? phd a: no ? it 's on the filter bank , so , probably i i it . phd a: . , that 's all . so we 'll perhaps try to convince ogi people to use the new filters phd a: not yet but i wi i will call them now they are they have more time because they have this , eurospeech deadline is over professor b: and he 's been doing all the talking but these he 's , this is this a bad thing . we 're trying to get , m more female voices in this record as . make sur make carmen talks as . , but has he been talking about what you 're doing also , and ? phd f: , i am doing this . , . i ' m , but that for the recognizer for the meeting recorder that it 's better that i do n't speak . professor b: , we 'll get to , spanish voices sometime , and we do we want to recognize , you too . phd f: after the after , the result for the ti - digits on the meeting record there will be foreigns people . professor b: we like we 're w we are we 're in the , bourlard - hermansky - morgan , frame of mind . , we like high error rates . it 's that way there 's lots of work to do . anything to talk about ? grad d: n , not much is new . so when i talked about what i ' m planning to do last time , i said i was , going to use avendano 's method of , using a transformation , to map from long analysis frames which are used for removing reverberation to short analysis frames for feature calculation . he has a trick for doing that involving viewing the dft as a matrix . 
, but , , i decided not to do that after all because i realized to use it i 'd need to have these short analysis frames get plugged directly into the feature computation somehow and right now our feature computation is set to up to , take , audio as input , in general . so i decided that i 'll do the reverberation removal on the long analysis windows and then just re - synthesize audio and then send that . grad d: or even if i ' m using our system , i was thinking it might be easier to just re - synthesize the audio , grad d: because then i could just feacalc as is and i would n't have to change the code . professor b: , ok . , it 's certainly in a short - term this just sounds easier . professor b: , longer - term if it 's if it turns out to be useful , one might want to do something else , professor b: , , in other words , you may be putting other kinds of errors in from the re - synthesis process . grad d: but e u from the re - synthesis ? o - ok . i anything about re - synthesis . , how likely do you think that is ? professor b: , it depends what you do . , it 's , but anyway it sounds like a reasonable way to go for a for an initial thing , and we can look at exactly what you end up doing and then figure out if there 's some something that could be hurt by the end part of the process . grad e: , i ' ve been continuing reading . i went off on a little tangent this past week , looking at , , modulation s spectrum , and learning a bit about what , what it is , and , the importance of it in speech recognition . and i found some , neat papers , historical papers from , kanedera , hermansky , and arai . and they did a lot of experiments where th where , they take speech and , e they modify the , they measure the relative importance of having different , portions of the modulation spectrum intact . and they find that the spectrum between one and sixteen hertz in the modulation is , is i m important for speech recognition . professor b: , this goes back to earlier by drullman . and and , the msg features were built up with this notion professor b: but , i , you had brought this up in the context of , targets somehow . but i m i it 's not , they 're not in the same category as , say , a phonetic target or a syllabic target professor b: or a feature . , i see . , that 's what msg does . but but ,
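a minimal sketch of measuring the band grad e mentions : take the log - energy trajectory of one channel and ask how much of its modulation energy falls between one and sixteen hertz . the frame rate is a placeholder .

```python
import numpy as np

def modulation_fraction(log_energy, frame_rate=100.0, lo=1.0, hi=16.0):
    # log_energy: (n_frames,) log-energy trajectory of one critical band
    traj = log_energy - log_energy.mean()
    spec = np.abs(np.fft.rfft(traj)) ** 2
    freqs = np.fft.rfftfreq(len(traj), d=1.0 / frame_rate)  # modulation freqs, hz
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()  # fraction of energy in the 1-16 hz band
```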
the icsi meeting recorder group at berkeley met to discuss progress on their main project , aurora. they discussed a conference call with project partners ; there have been some developments that should help speed up experiments , along with some progress made in the current area they are looking at , voiced/unvoiced detection. a number of other members of the group also reported the progress they were making on their work. me018 will mail people with details about changes to their system in order to run code on the ibm linux machine , along with the name of the machine. mn007 is going to try and convince ogi to use his new filters , and enquire as to the setting of a standard for the system. there was a conference call for the aurora project , but no one from icsi was involved. not only are the group unsure what if anything was decided , but project changes are being considered , including changing the baseline and improvement weighting , though everyone has their own opinion on these matters. the group needs more female voices in the meeting recorder data , though fn002 does not consider herself very suitable. me018 has been attempting to make experimentation faster by reducing the iterations in the htk training. these can be reset once the system is finalised. he then wants to try and improve accuracy by increasing the number of gaussian mixtures in the models. mn007 reported ogi's work on spectral subtraction , as well as his testing of his new filter. the latency is reduced , and improvement increased on the well-matched case , though decreased on mis-matched. along with fn002 he has been looking into voiced/unvoiced detection , and they are still looking for features. so far they are using features which approximate important details , and they seem reasonably robust to noise. me026 has decided against using the exact method of reverberation cancellation he previously discussed , because its output does not fit the current system. me006 has been reading on a slight tangent looking at classic work on modulation spectrum , which has inspired some ideas for input.
professor c: and hans - guenter will be here by next tuesday or so . professor c: and we 'll see . we might end up with some longer collaboration . so he 's gon na look in on everything we 're doing and give us his thoughts . and so it 'll be another good person looking at things . grad e: is that right ? ok . so i should probably talk to him a bit too ? professor c: he 'll be around for three weeks . he 's very easygoing , easy to talk to , and very interested in everything . phd a: back when i was a grad student he was here for a year or six months . phd a: so we ' ve got lots to catch up on . and we have n't met for a couple of weeks . we did n't meet last week , morgan . i went around and talked to everybody , and it seemed like they had some new results , but rather than them coming up and telling me , i figured we should just wait a week and they can tell all of us . so why do n't we start with you , dave , and then we can go on . grad e: so since we 're looking at putting this mean log magnitude spectral subtraction into the smartkom system , i did a test seeing if it would work using the past only plus the present to calculate the mean . so i did a test where i used twelve seconds from the past and the present frame to calculate the mean . and phd a: twelve seconds back from the current frame , is that what you mean ? grad e: so it was twenty - one frames , and that worked out to about twelve seconds . grad e: and compared to using a twelve second centered window , there was a drop in performance , but it was just a slight drop . grad e: so that was encouraging . and that 's encouraging for the idea of using it in an interactive system . and another issue i ' m thinking about is in the smartkom system . say twelve seconds in the earlier test seemed like a good length of time , but what happens if you have less than twelve seconds ? and so before , back in may , i did some experiments using , say , two seconds , or four seconds , or six seconds . in those i trained the models using mean subtraction with the means calculated over two seconds , or four seconds , or six seconds . here , i was curious : what if i trained the models using twelve seconds but gave it a situation where the test set was subtracted using two seconds , or four seconds , or six seconds ? so i did that for about three different conditions . i think it was something like four seconds , six seconds , and eight seconds . something like that . and it seems like it hurts compared to if you actually train the models using that same length of time , but it does n't hurt that much . usually less than point five percent , although i did see one where it was a point eight percent or so rise in word error rate . but this is where , even if i train the model and mean subtract it with the same length of time as in the test , the word error rate is around ten percent or nine percent . so it does n't seem like that big a difference . professor c: but looking at it the other way , is n't what you 're saying that it did n't help you to have the longer time for training , if you were going to have a short time for testing ? professor c: why would you do it , if you knew that you were going to have short windows in testing ? phd a: it seems like in normal situations you would never get twelve seconds of speech , right ?
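a rough sketch of the past - only mean subtraction dave describes , assuming the mean is taken over the current frame plus up to twenty past frames (the function name and array layout are illustrative , not his code) :

```python
# sketch: per-bin mean subtraction of log magnitude spectra using only the
# past and the present frame (21 frames ~ 12 seconds in the meeting's setup).
import numpy as np

def causal_mean_log_subtraction(log_mag, window=21):
    """log_mag: (num_frames, num_bins) log magnitude spectra.
    subtracts, per bin, the mean over the current frame and up to
    window - 1 past frames; early frames just use what is available."""
    out = np.empty_like(log_mag)
    for t in range(log_mag.shape[0]):
        start = max(0, t - window + 1)
        out[t] = log_mag[t] - log_mag[start:t + 1].mean(axis=0)
    return out
```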
i ' m not sure phd b: you need twelve seconds in the past to estimate , right ? or you 're looking at six seconds in the future and six in the past ? phd a: is this twelve seconds regardless of speech or silence ? or twelve seconds of speech ? professor c: the other thing , which maybe relates a little bit to something else we ' ve talked about in terms of windowing and so on , is that i wonder if you trained with twelve seconds , and then when you were two seconds in you used two seconds , and when you were four seconds in you used four seconds , and when you were six , and you build up to the twelve seconds . so that if you have very long utterances you have the best , but if you have shorter utterances you use what you can . grad e: but so the question i was trying to get at with those experiments is , does it matter what models you use ? does it matter how much time you use to calculate the mean when you were doing the training data ? professor c: right . but the other way of looking at this , going back to mean cepstral subtraction versus rasta things , is that you could look at mean cepstral subtraction , especially the way you 're doing it , as being a filter . and so the other thing is just to design a filter . you 're doing a high - pass filter or a band - pass filter of some sort , so just design a filter . and then a filter will have a certain behavior , and you can look at the start up behavior when you start up with nothing . and if you have an iir filter , it will not behave in the steady - state way that you would like it to behave until you get a long enough period . but by just constraining yourself to have your filter be only a subtraction of the mean , you 're tying your hands behind your back , because filters have all sorts of temporal and spectral behaviors . and the only consistent thing that we know about is that you want to get rid of the very low frequency component . phd b: but do you really want to calculate the mean ? and do you neglect all the silence regions , or do you just use everything that 's in the twelve seconds ? phd b: so you really need a lot of speech to estimate the mean of it . grad e: if i only use six seconds , it still works pretty well . i saw that in my test before . i was trying twelve seconds cuz that was the best in my test before , and increasing past twelve seconds did n't seem to help . it 's something i need to play with more , to decide how to set that up for the smartkom system . like , maybe if i trained on six seconds it would work better when i only had two seconds or four seconds . professor c: if you take this filtering perspective , and if you essentially have it build up over time if you computed means over two and then over four and over six , essentially what you 're getting is a ramp up of a filter anyway . and so you may just want to think of it as a filter . but if you do that , then in practice , for somebody using the smartkom system for a while , it means that on their first utterance , instead of getting a forty percent error rate reduction over what you 'd get without this policy , they 'll get thirty percent . and then from the second utterance on they get the full benefit of it , if it 's this ongoing thing . professor c: that is , if somebody 's using a system to ask for directions , they 'll say something first .
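morgan's filter view is easy to make concrete : per - bin mean subtraction is roughly a high - pass filter on each log - magnitude trajectory , so one could design the filter directly . a minimal one - pole version ; alpha is an illustrative choice , and the zero initialization shows exactly the start - up transient he mentions :

```python
# sketch: running-mean removal as a one-pole iir high-pass per frequency bin.
import numpy as np

def highpass_log_mag(log_mag, alpha=0.99):
    """log_mag: (num_frames, num_bins).  mean[t] = alpha * mean[t-1]
    + (1 - alpha) * x[t]; output is x[t] - mean[t].  behaves like mean
    subtraction in steady state, with an iir start-up transient."""
    mean = np.zeros(log_mag.shape[1])
    out = np.empty_like(log_mag)
    for t, frame in enumerate(log_mag):
        mean = alpha * mean + (1.0 - alpha) * frame
        out[t] = frame - mean
    return out
```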
and to begin with , if it does n't get them quite right , maybe they 'll come back and say , " excuse me ? " or something it should have some policy like that anyway . and in any event they might ask a second question . and it 's not like what he 's doing does n't improve things . it does improve things , just not as much as he would like . and so there 's a higher probability of it making an error in the first utterance . phd a: what would be really neat is if you could have users would probably never like this but if you could have a system where , before they began to use it , they had to introduce themselves verbally : " hi , my name is so - and - so , i ' m from blah - blah . " and you could use that initial speech to do all these adaptations and professor c: the other thing which i do n't know as much about as i should about the rest of the system is , could n't you , if you did a first pass i do n't know what capability we have at the moment for doing second passes on some small lattice , or a graph , or a confusion network . but if you did a first pass either without the mean subtraction or with a very short time one , and then , once you actually had the whole utterance in , you did the longer time version based on everything that you had , and at that point only used it to distinguish between the top n possible utterances , it might not take very much time . i know in the large vocabulary systems people were evaluating on in the past , some people really pushed everything in to make it one pass , but other people did n't and had multiple passes . and the argument against multiple passes has often been " but we want this to have an interactive response " . and the counterargument to that , which , say , bbn had , was " but our second passes and third passes are really , really fast " . so if your second pass takes a millisecond , who cares ? grad e: so the idea of the second pass would be waiting till you have more recorded speech ? or ?
phd b: so the last two weeks i ' ve been working on that wiener filtering . and i found that a single , normal wiener filtering , like the standard method of wiener filtering , does n't actually give me much improvement it actually improves over the baseline , but it does n't meet something like fifty percent . so i ' ve been playing with the vad phd b: so the improvement is somewhere around , like , thirty percent over the baseline . professor c: no , no , but in combination with our on - line normalization or with the lda ? phd b: no . it actually improves over the baseline of not having a wiener filter in the whole system . like , i have lda plus on - line normalization , and then i plug the wiener filter into that , phd b: so it improves over not having the wiener filter . so it improves , but it does n't take it beyond , like , thirty percent over the baseline . so professor c: but that 's what i ' m confused about , cuz our system was more like forty percent without the wiener filtering . phd b: these are not no , it 's the old vad . so my baseline was this is like the baseline is ninety - five point six eight , and eighty - nine , and professor c: so , if you can do all these in word error rates it 's a lot easier actually . professor c: if you do all these in word error rates it 's a lot easier , right ? phd d: the baseline that you are talking about is the mfcc baseline , right ? phd b: so one baseline is the mfcc baseline when i said thirty percent improvement , it 's over the mfcc baseline . professor c: so what does it start at ? the mfcc baseline is at what level ? phd b: so i do n't have that number here . ok , i have it here . it 's the vad plus the baseline actually . i ' m talking about the mfcc plus i do a frame dropping on it . so the word error rate is like four point three . like ten point seven . phd b: it 's a medium mismatch ok , there 's a matched , a medium mismatched , and a high mismatched . so i do n't have , like , the phd b: it 's like ten point one . still the same . and the high mismatch is like eighteen point five . phd b: this one is just the baseline plus the wiener filter plugged into it . phd b: so , with the on - line normalization , the performance was ok , so it 's like four point three , ten point four , and twenty point one . that was with on - line normalization and lda . so the well - matched has like literally not changed by adding on - line or lda to it . even the medium mismatch is the same . and the high mismatch was improved by twenty percent absolute . professor c: or italian ? and what was the corresponding number , say , for the alcatel system ? phd d: it 's three point four , eight point seven , and thirteen point seven . phd b: so this is the single stage wiener filter , with the noise estimation based on the first ten frames . actually i started with using the vad to estimate the noise , and then i found that it does n't work for finnish and spanish , because the vad endpoints are not good enough to estimate the noise it cuts into the speech sometimes , so i end up overestimating the noise and getting a worse result . so using a vad to estimate noise works only for italian . it works for italian because the vad was trained on italian .
so this was like not improving a lot on this baseline of not having the wiener filter on it . so i ran this with one more stage of wiener filtering on it , but the second time , what i did was i estimated the new wiener filter based on the cleaned up speech , and did smoothing in the frequency to reduce the variance i ' ve observed there are , like , a lot of bumps in the frequency when i do this wiener filtering , which is more like a musical noise . and so by adding another stage of wiener filtering , the results on the speechdat - car were like so i still do n't have the word error rate , i ' m still waiting on it , but the overall improvement was like fifty - six point four six . this was again using ten frames of noise estimate and two stages of wiener filtering . and the rest , like the lda and the on - line normalization , all remained the same . so this was , like , compared to fifty - seven is what you got by using the french telecom system , right ? phd b: so the new wiener filtering scheme is like some fifty - six point four six , which is like one percent still less than what you got using the french telecom system . professor c: but again , you 're more or less doing what they were doing , right ? phd b: it 's different in a sense , like i ' m actually cleaning up the cleaned up spectrum , which they 're not doing . what they 're doing is , they have two stages of estimating the wiener filter , but the final filter , what they do is they take it to the time domain by doing an inverse fourier transform . and they filter the original signal using that filter , so the final filter is acting on the input noisy speech rather than on the cleaned up . so this is more like i ' m doing the wiener filter twice , but the only thing is that the second time i ' m actually smoothing the filter and then cleaning up the first - stage cleaned up spectrum . and so that 's what the difference is . and actually i tried it on the original spectrum where , like , the second time i estimate the filter but actually clean up the noisy speech rather than the output of the first stage , and that does n't seem to be giving that much improvement . i did n't run it for the whole case . and what i tried was , by using the same thing but so we actually found that the vad is very , like , crucial . just changing the vad itself gives you a lot of improvement instead of using the current vad , if you just take the vad output from channel zero , instead of using channel zero and channel one , because that was the reason why i was not getting a lot of improvement for estimating the noise . so i used the channel zero vad to estimate the noise , so that it gives me some reliable markers for this noise estimation . phd b: because channel zero and channel one are like the same speech , with the same endpoints . but the only thing is that the speech is very noisy for channel one , so you can actually use the output of channel zero for channel one for the vad . , that 's like a cheating method . professor c: so what are they going to do , do we know yet ? about what the rules are going to be and what we can use ?
phd d: and what they did finally is not to align the utterances but to perform recognition only on the close - talking microphone , phd d: and to take the result of the recognition to get the boundaries of speech . professor c: so it 's not like that 's being done in one place or one time . professor c: that 's just a rule , and you 're permitted to do that . is that it ? professor c: so they will send files so everybody will have the same boundaries to work with ? phd b: but actually their alignment does not seem to be improving things in all cases . phd d: so what happened here is that the overall improvement that they have with this method so , to be more precise , what they have is these alignments , and then they drop the beginning silence and the end silence , but they keep two hundred milliseconds before speech and two hundred after speech . and they keep the speech pauses also . and the overall improvement over the mfcc baseline so , when they just add this frame dropping in addition , it 's fourteen percent . phd d: which is the overall improvement . but in some cases it does n't improve . like do you remember which case ? phd b: so by using the endpointed speech , actually it 's worse than the baseline in some instances , which could be due to the word pattern . phd d: and the other thing also is that fourteen percent is less than what you obtain using a real vad . phd d: so this shows that working on the vad is still important . phd a: can i ask just a high level question ? can you just say like one or two sentences about wiener filtering and why people are doing that ? what 's the deal with that ? phd b: ok , so the basic principle of the wiener filter is that you try to minimize the difference between the noisy signal and the clean signal if you have two channels . like , let 's say you have a clean signal and you have an additional channel which is the noisy signal , and then you try to minimize the error between these two . so that 's the basic principle . and you can do that if you have only a noisy signal , at a level where you try to estimate the noise , assuming that the first few frames are noise , or , if you have a voice activity detector , you estimate the noise spectrum . and then you phd b: after the speech starts . but that 's not the case in many of our cases , but it works reasonably well . phd b: and then what you do is you write down some of these equations and then you do this this is the transfer function of the wiener filter , where " sf " is the clean speech power spectrum and " nf " is the noise power spectrum . and so this is the transfer function . phd b: and then you multiply your noisy power spectrum with this , and you get an estimate of the clean power spectrum . but you have to estimate the sf from the noisy spectrum , which is what you have . so you estimate the nf from the initial noise portions and then you subtract that from the current noisy spectrum to get an estimate of the sf . so sometimes that becomes zero , because you do n't have a true estimate of the noise . so the filter will sometimes have zeros in it some frequency values will be zeroed out because of that . and that creates a lot of discontinuities across the spectrum because of the filter .
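putting the board equation back together from the description : the transfer function is h(f) = sf / (sf + nf) , with sf estimated as the noisy spectrum minus the noise estimate . a minimal single - stage sketch , assuming the noise is taken from the first ten frames (names are illustrative , not the actual code) :

```python
# sketch: single-stage wiener filtering of power spectra, noise estimated
# from the first frames.  where the clean estimate hits zero, the gain is
# zero too; that is the source of the musical noise discussed here.
import numpy as np

def wiener_filter(noisy_pow, noise_frames=10):
    """noisy_pow: (num_frames, num_bins) power spectra of the noisy speech."""
    n_hat = noisy_pow[:noise_frames].mean(axis=0)   # noise estimate nf
    s_hat = np.maximum(noisy_pow - n_hat, 0.0)      # clean estimate sf
    h = s_hat / (s_hat + n_hat + 1e-12)             # h(f) = sf / (sf + nf)
    return h * noisy_pow                            # cleaned power spectrum
```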
phd b: so that was just the first stage of wiener filtering that i tried .
professor c: it 's all pretty related . there 's a whole class of techniques where you try in some sense to minimize the noise , and it 's typically in a mean square sense , in some way . and spectral subtraction is one approach to it .
phd a: do people use the wiener filtering in combination with the spectral subtraction typically , or are they competing techniques ?
phd b: i have n't seen anybody using the wiener filter with spectral subtraction .
professor c: in the long run you 're doing the same thing , but there you make different approximations , and in spectral subtraction there 's an overestimation factor .
professor c: you sometimes will figure out what the noise is and you 'll multiply that noise spectrum times some constant and subtract that . and even though this really should be in the power domain , sometimes people work in the magnitude domain because it works better .
phd a: so why did you choose wiener filtering over one of these other techniques ?
phd b: the reason was , like , we had this choice of using spectral subtraction , wiener filtering , and there was one more thing which i ' m trying , this subspace approach . stephane is working on spectral subtraction , so i picked up wiener filtering .
phd b: we just wanted to have a few noise compensation techniques and then pick one from that .
professor c: and there 's carmen working on another , on the vector taylor series .
professor c: so they were just trying to cover a bunch of different things with this task and see what the issues are for each of them .
phd b: so one of the things that i tried , like i said , was to remove those zeros in the filter by doing some smoothing of the filter . like , you estimate the hf squared and then you do a smoothing across the frequency so that those zeros get , like , flattened out . and that does n't seem to improve when you try it at the first stage . so what i did was i plugged in the same thing but with the smoothed filter the second time , and that seems to be working . so that 's where i got like fifty - six point five percent improvement on speechdat - car with that . the other thing i tried was , i still used the ten frames of noise estimate , but i used this channel zero vad to drop the frames , so i ' m still not estimating the noise with it . and that has taken the performance to like sixty - seven percent on speechdat - car , which shows that by using a proper vad you can just take it to further , better levels .
phd b: so far i have n't seen anything like sixty - seven percent . and using the channel zero vad to estimate the noise also seems to be improving , but i do n't have the results for all the cases with that . so i used the channel zero vad to estimate the noise as well as for frame dropping , which is like , everywhere i use the channel zero vad . and that seems to be the best combination , rather than using a few frames to estimate the noise and then dropping frames .
professor c: so i ' m still a little confused . is that channel zero information going to be accessible during this test ?
phd b: no . this is just to test whether we can really improve by using a better vad . so the noise compensation is fixed , but you make a better decision on the endpoints .
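a sketch of the " cheating " experiment just described : using the close - talking channel 's vad decisions both to pick the frames for the noise estimate and to drop non - speech frames on the noisy channel . the array names and fallback are hypothetical .

```python
import numpy as np

def noise_from_vad(psd, vad_speech, n_fallback=10):
    # average the frames the vad marks as non-speech; fall back to the
    # first few frames if the vad never marks any silence
    sil = ~vad_speech.astype(bool)
    return psd[sil].mean(axis=0) if sil.any() else psd[:n_fallback].mean(axis=0)

def drop_nonspeech(features, vad_speech):
    # frame dropping: keep only the frames marked as speech
    return features[vad_speech.astype(bool)]

psd = np.abs(np.random.randn(300, 129)) ** 2   # stand-in channel-1 spectra
vad = np.random.rand(300) > 0.4                # stand-in channel-0 vad labels
noise = noise_from_vad(psd, vad)
speech_only = drop_nonspeech(psd, vad)
```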
phd b: which means , like , by just improving the vad with this technique ,
phd b: we can take the performance up by another ten percent or better . so that was just the reason for doing that experiment . but all these things i still have to try on the ti - digits , which i ' m just running . and it seems to be not improving a lot on the ti - digits , so i ' m investigating why it 's not . the other thing is , i ' ve been doing all this on the power spectrum . i tried this on the mel , and the magnitude , and the mel magnitude , and all those things , but the power spectrum seems to be getting the best result . so one of the reasons is maybe that doing the averaging with the mel filter bank after the filtering helps , rather than trying it on the mel filter bank outputs .
phd b: that 's the only thing that i could think of for why it 's not giving improvement on the mel . so that 's it .
phd b: subspace that 's still a little bit on the back burner , because i ' ve been putting a lot of effort on this to make it work , on tuning things and other stuff . i was going at it in parallel but without much improvement . i just have some skeletons ready , i need some more time for it .
phd d: so i ' ve been working still on the spectral subtraction . to remind you a little bit of what i did before : i apply some spectral subtraction with an overestimation factor , so i get an estimate of the noise spectrum and subtract this estimate from the signal spectrum , but subtracting more when the snr is low , which is a technique that 's often used .
phd d: so you overestimate the noise spectrum . you multiply the noise spectrum by a factor which depends on the snr . above twenty db it 's one , so you just subtract the noise . and then generally i use a linear function of the snr , which is bounded to , like , two or three when the snr is below zero db . doing just this , either on the fft bins or on the mel bands , does n't yield any improvement .
phd d: so there is also a threshold , because after subtraction you can have negative energies . so what i do is to put the threshold first and then to add a small amount of noise , which right now is speech - shaped .
phd d: so it has the overall power spectrum of speech , with a bump around one kilohertz .
phd a: so when you talk about there being something less than zero after subtracting the noise , is that at a particular frequency bin ?
phd a: and so when you say you 're adding something that has the overall shape of speech , is that in a particular frequency bin ? or are you adding something across all the frequencies when you get these negatives ?
phd d: for each frequency i ' m adding some noise , but the amount of noise i add is not the same for all the frequency bins .
phd d: right now i do n't know if it makes sense to add something that 's speech - shaped , because then you have silence portions that have spectra similar to the overall speech spectrum . but this is something i still have to work on .
phd a: i ' m trying to understand what it means when you do the spectral subtraction and you get a negative .
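a minimal sketch of the oversubtraction rule just described : a factor of one above twenty db snr , rising linearly to around two or three at and below zero db , with thresholding of negatives followed by a small added noise floor . the flooring fraction and the exact bound are assumptions .

```python
import numpy as np

def oversubtraction_factor(snr_db, lo_db=0.0, hi_db=20.0, alpha_max=2.5):
    # 1.0 above 20 dB snr, rising linearly to ~2.5 at and below 0 dB
    alpha = 1.0 + (alpha_max - 1.0) * (hi_db - snr_db) / (hi_db - lo_db)
    return np.clip(alpha, 1.0, alpha_max)

def spectral_subtract(noisy_psd, noise_psd, floor_frac=0.01, added_noise=None):
    snr_db = 10.0 * np.log10(np.maximum(noisy_psd / (noise_psd + 1e-12), 1e-12))
    clean = noisy_psd - oversubtraction_factor(snr_db) * noise_psd
    # threshold the negatives first ...
    clean = np.maximum(clean, floor_frac * noisy_psd)
    # ... then add a small amount of noise (speech-shaped in the
    # experiments above, i.e. an overall speech spectrum with a bump
    # around 1 kHz), per frequency bin
    if added_noise is not None:
        clean = clean + added_noise
    return clean
```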
phd a: it means that at that particular frequency range you subtracted more energy than there actually was
phd d: that means so , you have an estimation of the noise spectrum , but as the noise is not perfectly stationary , sometimes this estimation can be too small , so you do n't subtract enough . but sometimes it can be too large also , if the noise energy in this particular frequency band drops for some reason .
phd a: so in an ideal world , if the noise were always the same , then when you subtracted it the worst you would get would be a zero . the lowest you would get would be a zero , cuz if there was no other energy there , you 're just subtracting exactly the noise .
professor c: there 's all sorts of deviations from the ideal here . you 're talking about the signal and noise at a particular point . and even if something is stationary in terms of statistics , there 's no guarantee that any particular instantiation or piece of it is exactly a particular number or bounded by a particular range . so you 're figuring out from some chunk of the signal what you think the noise is , then you 're subtracting that from another chunk , and there 's no reason to think that it would n't be negative in some places . on the other hand , that just means that in some sense you ' ve made a mistake , because you certainly have subtracted a bigger number than is due to the noise . also , the whole place where all this comes from is an assumption that signal and noise are uncorrelated . and that certainly makes sense in a statistical interpretation , that over all possible realizations they 're uncorrelated , or , assuming ergodicity , that across time it 's uncorrelated . but if you just look at a quarter second and you cross - multiply the two things , you could very well end up with something that sums to something that 's not zero . so the two signals could have some relation to one another . so there 's all sorts of deviations from ideal in this , and given all that , you could definitely end up with something that 's negative . but if down the road you 're making use of something as if it is a power spectrum , then it can be bad to have something negative . now , the other thing i wonder about actually is , what if you left it negative ? what happens ?
professor c: because are you taking the log before you add them up into the mel bands ?
professor c: so i wonder , if you put your thresholds after that , how often you would end up with negative values .
phd b: but you end up reducing some neighboring frequency bins in the average , right ? when you add the negative to the positive value which is the true estimate .
professor c: but nonetheless , it 's another smoothing , right ? that you 're doing . so you ' ve done your best shot at figuring out what the noise should be , and now you ' ve subtracted it off . and then after that , instead of leaving it as is and adding up some neighbors , you artificially push it up . and there 's no particular reason that 's the right thing to do either . so what you 'd be doing is saying , " we 're going to definitely diminish the effect of this frequency , in this little frequency bin , in the overall mel summation " . it 's just a thought . i do n't know if it would be better .
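a sketch of the thought experiment raised above : leave the negative bins alone , let the mel summation average them against their neighbors , and threshold only the mel energies before the log . the mel_fbank matrix and floor value are hypothetical .

```python
import numpy as np

def mel_log_energies_deferred_floor(subtracted_psd, mel_fbank, floor=1e-8):
    # subtracted_psd: (frames x fft bins), may contain negative values
    # mel_fbank:      (mel bands x fft bins) triangular filter weights
    mel = subtracted_psd @ mel_fbank.T      # negatives pull neighbors down
    return np.log(np.maximum(mel, floor))   # threshold only after summation
```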
phd a: the opposite of that would be , if you find out you 're going to get a negative number , you do n't do the subtraction for that bin .
phd a: that would be almost the opposite : instead of leaving it negative , you do n't do it . if your subtraction 's going to result in a negative number , you do n't do subtraction in that bin .
professor c: but that means that in a situation where you thought that the bin was almost entirely noise , you left it alone .
phd d: and some people also , if it 's a negative value , re - compute it using interpolation from the neighboring bins .
professor c: people can also reflect it back up and essentially do a full - wave rectification instead of a half - wave . but it was just a thought that it might be something to try .
phd d: actually , i tried something else based on this , which is to put in some smoothing , because it seems to help it seems to help the wiener filtering . so what i did is some nonlinear smoothing . actually i have a recursion that computes let me go back a little bit . when you do spectral subtraction , you can find its equivalent in the spectral domain : you can say that your spectral subtraction is a filter , and the gain of this filter is the signal energy minus what you subtract , divided by the signal energy . and this is a gain that varies over time , depending on the noise spectrum and on the speech spectrum . what happens actually is that during low snr the gain is close to zero but it varies a lot . and this is the cause of the musical noise and all these things the fact that you go below zero one frame and then you can have an energy that 's above zero . so i did a smoothing actually on this gain trajectory . but the smoothing is nonlinear , in the sense that i try to not smooth if the gain is high , because in this case we know that the estimate of the gain is correct , because we are not close to zero , and to do more smoothing if the gain is low . so that 's the idea , and it seems to give pretty good results , although i ' ve just tested on italian and finnish . on italian my result seems to be a little bit better than the wiener filtering ,
phd d: i do n't know if you have the detailed improvements for italian , finnish , and spanish there
professor c: so these numbers he was giving before , with the four point three and the ten point one those were italian , right ?
phd b: no , i actually did n't give you the number which is the final one ,
phd b: which is after two stages of wiener filtering . the overall improvement is like fifty - six point five . his number is still better than what i got with the two stages of wiener filtering .
professor c: but do you have numbers in terms of word error rates on italian ? just so you have some sense of reference ?
phd d: and then nine point one . and finally , sixteen point five .
phd d: plus nonlinear smoothing . it 's exactly the same system as sunil tried ,
phd a: what is it the france telecom system uses do they use spectral subtraction , or wiener filtering , or ?
phd b: filtering . it 's not exactly wiener filtering but some variant of wiener filtering .
phd b: their noise compensation technique is a variant of wiener filtering , plus they do some smoothing techniques on the final filter .
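a sketch of the nonlinear gain smoothing phd d describes above : heavy temporal smoothing where the gain is low and unreliable ( the source of musical noise ) , little smoothing where it is high . the interpolation of the forgetting factor from the gain itself is an assumption about the exact rule .

```python
import numpy as np

def smooth_gain(gains, lam_low=0.9, lam_high=0.1):
    # gains: (frames,) or (frames x bands), values in [0, 1]
    out = np.empty_like(gains)
    state = gains[0]
    for t in range(len(gains)):
        g = np.clip(gains[t], 0.0, 1.0)
        # large forgetting factor (heavy smoothing) where g is low,
        # small one (light smoothing) where g is high and trustworthy
        lam = lam_low * (1.0 - g) + lam_high * g
        state = lam * state + (1.0 - lam) * g
        out[t] = state
    return out
```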
phd b: they actually do the filtering in the time domain . so they take this hf squared back to the time domain by taking an inverse fourier transform , and they convolve the time - domain signal with that .
phd d: one in the time domain and one in the frequency domain , by just taking the first coefficients of the impulse response .
phd d: because you have also two smoothings : one in the time domain and one in the frequency domain .
phd a: does the smoothing in the time domain help do you get this musical noise with wiener filtering , or is that only with spectral subtraction ?
phd a: does the smoothing in the time domain help with that ? or some other smoothing ?
professor c: it 's not clear that these musical noises hurt us in recognition . we do n't know if they do . they sound bad , though .
phd d: actually , the smoothing that i did do here reduced the musical noise .
phd d: you can not hear it , beca actually , what i did not say is that this is not on the fft bins , this is on the mel frequency bands . it could be seen as a smoothing in the frequency domain , because i use the mel bands , and then the other phase of smoothing in the time domain . but when you look at the spectrogram , if you do n't have any smoothing , you clearly see , like in silence portions and at the beginning and end of speech , spots of high energy randomly distributed over the spectrogram .
phd d: which is musical noise . if you do this on the fft bins , then you have spots of energy randomly distributed , and if you re - synthesize these spots , you hear them as , like , tones ,
professor c: none of these systems have the neural net . you both are working with our system that does not have the neural net , so one would hope , presumably , that the neural net part of it would improve things further , as it did before .
phd d: although if we look at the results from the proposals , one of the reasons the system with the neural net was more than around five percent better is that it was much better on the highly mismatched condition . i ' m thinking of ti - digits trained on clean speech and tested on noisy speech . for this case the system with the neural net was much better , but not much in the other cases . now that we have spectral subtraction or wiener filtering , i do n't know if the system with the neural network is still much better than before , even in these cases of high mismatch . so maybe the neural net will help less , but
professor c: it could do a nonlinear spectral subtraction , but i do n't know you have to figure out what your targets are .
phd a: i was thinking , if you had a clean version of the signal and a noisy version , and your targets were the whatever , frequency bins
professor c: that 's not so much spectral subtraction then , but at any rate ,
professor c: we had visitors here who did that , way back when . people have done lots of experimentation over the years with training neural nets for this , and it 's not a bad thing to do it 's another approach . the objection everyone always raises , which has some truth to it , is that it 's good for mapping from a particular noise to clean , but then you get a different noise . and the experiments the visitors did here showed that there was at least some gentleness to the degradation when you switched to different noises . it did seem to help . so you 're right , that 's another way to go .
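a sketch in the spirit of the time - domain application described above : the gain 's impulse response is truncated to its first coefficients ( the truncation itself acts as a smoothing of the filter in frequency ) and convolved with the signal . the tap count is an assumption , and a real implementation would likely window and center the response rather than hard - truncate .

```python
import numpy as np

def apply_gain_in_time(signal, gain, n_taps=17):
    # impulse response of the frequency-domain gain (gain over rfft bins)
    h = np.fft.irfft(gain)
    # keep only the first coefficients: truncation = frequency smoothing
    h = h[:n_taps]
    # filter the original time-domain signal with the resulting filter
    return np.convolve(signal, h, mode="same")
```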
phd a: how did it compare for the good cases , the ones it was trained on ? did it do pretty well ?
professor c: it did very well . but to some extent that 's what we 're doing . we 're not doing exactly that we 're not trying to generate good examples but by trying to do the best classifier you possibly can for these little phonetic categories ,
professor c: it 's built into that . and that 's why we have found that it does help . so we 'll just have to try it , but i would imagine that it will help some . we 'll just have to see whether it helps more or less the same , but i would imagine it would help some . so in any event , all of this i was just confirming that all of this was with the simpler system . ok ?
phd d: so this was the first try with this spectral subtraction plus smoothing , and i was excited by the result . then i started to optimize the different parameters . the first thing i tried to optimize is the time constant of the smoothing , and it seems that the one that i chose for the first experiment was the optimal one .
phd d: so this is the first thing . another thing that 's important to mention is that this has some additional latency . because when i do the smoothing , it 's a recursion that estimates the means of the gain curve , and this is a filter that has some latency . and i noticed that it 's better if we take this latency into account . so instead of using the current estimated mean to subtract from the current frame , it 's better to use an estimate that 's somewhere in the future .
phd b: you mean the mean is computed based on some frames in the future also ? or no ?
phd d: it 's the same recursion , and the latency of this recursion is around fifty milliseconds .
phd d: the mean estimation has some delay the filter that estimates the mean has a time constant .
professor c: depending on how all this comes out , we may or may not be able to add any latency .
phd d: but so , it depends . actually , it 's three percent . but i do n't think we have to worry too much about that right now .
professor c: the only thing is that i would worry about it a little . because if we completely ignore latency , and then we discover that we really have to do something about it , we 're going to find ourselves in a bind . maybe you could make it twenty - five . just be a little conservative
professor c: because we may end up with this crunch where all of a sudden we have to cut the latency in half .
phd d: there are other things in the algorithm that i did n't tune a lot yet .
phd a: a quick question just about the latency thing . if there 's another part of the system that causes a latency of a hundred milliseconds , is this an additive thing ? or is yours hidden in that ?
phd b: we can do some things in parallel also , in some cases . like , if you want to do voice activity detection , we can do that in parallel with some other filtering . so you can make a decision on that voice activity detection and then you decide whether you want to filter or not , but by then you already have sufficient samples to do the filtering . so sometimes you can do it anyway .
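one way to read the latency compensation being described , as a sketch : a first - order recursive estimate of the gain - curve mean , where each frame is paired with the mean computed a few frames later ( roughly fifty milliseconds at a ten millisecond frame rate ) . the forgetting factor and the lookahead length are assumptions .

```python
import numpy as np

def recursive_mean(x, lam=0.95):
    # first-order recursion; its effective delay is what adds latency
    m, out = x[0], np.empty_like(x)
    for t in range(len(x)):
        m = lam * m + (1.0 - lam) * x[t]
        out[t] = m
    return out

def mean_with_lookahead(x, lam=0.95, lookahead=5):
    # pair frame t with the mean estimated `lookahead` frames later,
    # trading a little extra latency for a better-centered estimate
    m = recursive_mean(x, lam)
    pad = np.repeat(m[-1:], lookahead, axis=0)
    return np.concatenate([m[lookahead:], pad], axis=0)
```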
phd a: could n't you just also i do n't know if it 's true that the largest latency in the system is two hundred milliseconds , but could n't you just buffer up that number of frames and then have everything use that buffer ? and that way it 's not additive ?
professor c: everything is sent over in buffers does n't the tcp buffer some ?
phd b: you mean the data , the super frame ? but that has a variable latency , because the last frame does n't have any latency and the first frame has a twenty - four frame latency . so you ca n't rely on that latency all the time . the transmission over the air interface is like a buffer of twenty - four frames . but the only thing is that the first frame in that twenty - four frame buffer has a twenty - four frame latency , and the last frame does n't have any latency , because it just goes as
phd a: i was n't thinking of that one in particular , but more that , if there is some part of your system that has to buffer twenty frames , ca n't the other parts of the system draw out of that buffer and therefore not add to the latency ?
professor c: and all of that is things that they 're debating in their standards committee .
phd d: there are these parameters that i still have to look at . i played a little bit with this overestimation factor , but i still have to look more at this , and at the level of noise i add afterwards . i know that adding noise helped the system just using spectral subtraction without smoothing , but i do n't know right now if it 's still important or not , and if the level i chose before is still the right one . same thing for the shape of the noise maybe it would be better to add just white noise instead of speech - shaped noise .
phd d: and another thing is that for this i use as the noise estimate the mean spectrum of the first twenty frames of each utterance . i do n't remember , for this experiment , what did you use for the two - stage system ?
phd b: the reason was , in ti - digits i do n't have a lot . i had twenty frames most of the time .
phd d: but so what 's this result you told me about , the fact that if you use more than ten frames you can improve ?
phd b: that 's using the channel zero . if i use a channel zero vad to estimate the noise .
phd d: ok . but in this experiment i did n't use any vad . i used the first twenty frames to estimate the noise , and so i expected it to be a little bit better if i used more frames . ok , that 's it for spectral subtraction . the second thing i was working on is to try to look at noise estimation using some technique that does n't need voice activity detection . and for this i simply used some code that i had from belgium , which is a technique that takes a bunch of frames , and for each frequency band of these frames takes a look at the minima of the energy , and then averages these minima and takes this as an energy estimate of the noise for this particular frequency band . and there is something more to this , actually . these minima are computed based on high resolution spectra . so i compute an fft based on a long signal frame , which is sixty - four milliseconds
phd d: what i do actually is to take a tile on the spectrogram , and this tile is five hundred milliseconds long and two hundred hertz wide . and in this tile appear , like , the harmonics if you have a voiced sound , because these are the fft bins .
phd d: and when you take the minima in this tile : when you do n't have speech , these minima will give you some noise level estimate ; if you have voiced speech , these minima will still give you some noise estimate , because the minima are between the harmonics . and if you have other speech sounds , then it 's not the case , but if the time frame is long enough five hundred milliseconds seems to be long enough you still have portions whose minima are very close to the noise energy .
phd d: the sixty - four milliseconds is to compute the fft bins . actually , it 's better to use sixty - four milliseconds because , if you use thirty milliseconds , then because of this short windowing , for low pitch sounds the harmonics are not correctly separated . so if you take the minima , they will overestimate the noise a lot .
professor c: so you take sixty - four millisecond ffts and then you average them over five hundred ? what do you do over the five hundred ?
phd d: i take a bunch of these sixty - four millisecond frames to cover five hundred milliseconds , and then i look for the minima ,
phd d: on this bunch of fifty frames , right ? so the interest of this is that with this technique you can estimate some reasonable noise spectra with only five hundred milliseconds of signal . so if the noise varies a lot , you can better track the noise , which is not the case if you rely on the voice activity detector . so even if there are no speech pauses , you can track the noise level . the only requirement is that you must have , in this five hundred millisecond segment , voiced sound at least , cuz these minima will help you to track the noise level . so what i did is just to simply replace the vad - based noise estimate by this estimate , on speechdat - car only , actually . and it 's slightly worse , like one percent relative , compared to the vad - based estimates . the reason why it 's not better is that the speechdat - car noises are all stationary , so there really is no need to have something that 's adaptive . but i expect maybe some improvement on ti - digits , because in that case the noises are sometimes very variable . so i have to test it .
professor c: but what are you comparing with ? i ' m a little confused again . when you compare it with the vad - based is this the ?
phd d: it 's the france - telecom - based wiener filtering and vad . so it 's their system , but i just replace their noise estimate by this one .
phd d: no , no . it 's our system , but with just the wiener filtering from their system . actually , the best system that we still have is our system but with their noise compensation scheme ,
phd d: so i ' m trying to improve on this by replacing their noise estimate by something that might be better .
professor c: but the spectral subtraction scheme that you reported on also requires a noise estimate . could n't you try this for that ?
phd d: i was working on one and the other . i will try it for the spectral subtraction also .
phd b: so i ' m also using that new noise estimation technique in this wiener filtering that i ' m trying . i have some experiments running , but i do n't have the results yet . i do n't estimate the noise on the ten frames but use his estimate .
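a rough sketch of the minima - tracking noise estimate described above : high - resolution spectra over a past - only five hundred millisecond window , split into two hundred hertz bands , with per - band minima averaged into a noise floor . the exact minima bookkeeping and the parameter defaults are assumptions , not the belgium code .

```python
import numpy as np

def minima_noise_estimate(psd, sr=8000, n_fft=512, band_hz=200.0,
                          win_s=0.5, hop_s=0.016):
    # psd: (frames x fft bins) from ~64 ms frames, so that harmonics of
    # low-pitched voiced sounds are resolved and minima fall between them
    frames_per_win = max(1, int(win_s / hop_s))        # ~500 ms of frames
    bins_per_band = max(1, int(band_hz * n_fft / sr))  # ~200 Hz of bins
    n_frames, n_bins = psd.shape
    noise = np.empty_like(psd)
    for t in range(n_frames):
        lo_t = max(0, t - frames_per_win + 1)          # past-only tile
        tile_t = psd[lo_t:t + 1]
        for b in range(0, n_bins, bins_per_band):
            tile = tile_t[:, b:b + bins_per_band]      # 500 ms x 200 Hz
            # average of per-frame minima within the tile
            noise[t, b:b + bins_per_band] = tile.min(axis=1).mean()
    return noise
```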
phd d: i also implemented a spectral whitening idea which is in the ericsson proposal . the idea is just to flatten the log spectrum , and to flatten it more if the probability of silence is higher . in this way you can somewhat reduce the musical noise , and you reduce the variability if you have different noise shapes , because the spectrum becomes more flat in the silence portions . with this , no improvement yet , but there are a lot of parameters that we can play with . actually , this could be seen as a soft version of the frame dropping , because you could just put a threshold and say that below the threshold i will completely flatten the spectrum , and above this threshold , keep the same spectrum . so it would be like frame dropping , because during the silence portions , which are below the threshold of voice activity probability , you would have some dummy frame which is a perfectly flat spectrum . and this whitening is something that 's more soft , because the whitening is a function of the speech probability , so it 's not a hard decision . so maybe it can be used together with frame dropping , and when we are not sure if it 's speech or silence , maybe it has something to do with this .
professor c: it 's interesting . in jrasta we were essentially adding in white noise dependent on our estimate of the noise on the overall estimate of the noise . it never occurred to us to use a probability in there . you could imagine one where the amount that you added in was a function of the probability of it being speech or noise .
phd d: right now it 's a constant , just depending on the noise spectrum .
professor c: cuz that brings in powers of classifiers that we do n't really have in this other estimate . so it could be interesting . at what point does the system stop recording ? how much
phd d: so , with this technique , there are some i did something exactly the same as the ericsson proposal , but the probability of speech is not computed the same way . and for a lot of things , actually , a good speech probability is important . like for frame dropping , you can improve , like , by ten percent as sunil showed , if you use the channel zero speech probabilities . for this it might help too . the next thing i started to do is to try to develop a better voice activity detector . for this we can maybe try to train the neural network for voice activity detection on all the data that we have , including all the speechdat - car data . so i ' m starting to obtain alignments on these databases , and the way i do that is to use the htk system , but i train it only on the close - talking microphone , and then i obtain the viterbi alignment of the training utterances . actually , what i observed is that for italian there seems to be a problem .
phd b: so it does n't seem to help , either using channel zero or channel one .
phd d: the current vad that we have was trained on spine , right ?
phd d: italian , and ti - digits with noise . and it seems to work on italian but not on the finnish and spanish data . so maybe one reason is that the finnish and spanish noises are different .
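a minimal sketch of the soft whitening being described : interpolating each log - spectral frame toward a flat spectrum as the speech probability drops . the linear interpolation rule is an assumption , not the ericsson formula ; a speech probability of zero gives the perfectly flat " dummy " frame ( soft frame dropping ) , and a probability of one leaves the frame alone .

```python
import numpy as np

def soft_whiten(log_spec, p_speech):
    # log_spec: (frames x bands), p_speech: (frames,) in [0, 1]
    flat = log_spec.mean(axis=-1, keepdims=True)   # flat spectrum, same level
    w = np.clip(p_speech, 0.0, 1.0)[..., None]
    return w * log_spec + (1.0 - w) * flat
```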
phd d: actually , we listened to some of the utterances , and sometimes for finnish there is music in the recordings , and strange things . so the idea was to obtain alignments for all the databases and train on these databases , and also to try different features as input to the vad network . we came up with a bunch of features that we want to try , like the spectral slope , the degree of voicing using the features that we started to develop with carmen , the correlation between bands , and different features like that .
professor c: hans - guenter will be here next week , so he 'll be interested in all of these things .
the icsi meeting recorder group of berkeley met for the first time in two weeks. group members reported their progress in the areas of spectral subtraction , wiener filtering and noise estimation. they also discussed topics relating to the rules and preferences of the project they are working on , including single vs multiple passes. a number of the group also took time to explain the basics of their approaches to the group. there are hopes that a visitor , coming for three weeks , may lead to a longer term collaboration. the visitor works on spectral subtraction , so speaker me026 will make sure he talks to him. speaker mn007 agreed , at me013's suggestion , to try his noise estimation scheme in combination with the prior work on spectral subtraction. in implementing smoothing in the spectral subtraction , latency has been increased; while some feel this is nothing to worry about , others feel it is better to worry now , in case it turns out to be something to worry about. speaker me026 has been experimenting with spectral subtraction using different data window sizes. one possible idea is to use increasing windows as more data becomes available. speaker mn049 has been working on wiener filtering , and testing with just the base system provides 30% improvement. using a second stage of filtering led to even more improvement. speaker mn007 is working on spectral subtraction , still with minimal results. smoothing seems to help , and implementing alongside the neural net should also be positive. he has also been working on noise estimation with an energy minima approach that does not require the voice activity detector.
###dialogue: professor c: and hans - , hans - guenter will be here , by next tuesday or so . professor c: and , we 'll see . we might end up with some longer collaboration . so he 's gon na look in on everything we 're doing and give us his thoughts . and so it 'll be another good person looking at things . grad e: is that right ? , ok . so i should probably talk to him a bit too ? professor c: , . no , he 'll be around for three weeks . he 's , , very , easygoing , easy to talk to , and , very interested in everything . phd a: wh - back when i was a grad student he was here for a , a year or n six months . phd a: so , i we got lots to catch up on . and we have n't met for a couple of weeks . we did n't meet last week , morgan . , i went around and talked to everybody , and it seemed like they had some new results but rather than them coming up and telling me i figured we should just a week and they can tell both , all of us . so , why do n't we start with you , dave , and then , we can go on . so . grad e: so , since we 're looking at putting this , mean log m magnitude spectral subtraction , into the smartkom system , i did a test seeing if , it would work using past only and plus the present to calculate the mean . so , i did a test , where i used twelve seconds from the past and the present frame to , calculate the mean . and phd a: twelve seconds twelve twelve seconds back from the current frame , is that what you mean ? grad e: so it was , twen it was twenty - one frames and that worked out to about twelve seconds . grad e: and compared to , do using a twelve second centered window , there was a drop in performance but it was just a slight drop . grad e: - . so that was encouraging . and , that 's encouraging for the idea of using it in an interactive system like and , another issue i ' m thinking about is in the smartkom system . so say twe twelve seconds in the earlier test seemed like a good length of time , but what happens if you have less than twelve seconds ? and , so i w bef before , back in may , i did some experiments using , say , two seconds , or four seconds , or six seconds . in those i trained the models using mean subtraction with the means calculated over two seconds , or four seconds , or six seconds . here , i was curious , what if i trained the models using twelve seconds but i f i gave it a situation where the test set i was subtracted using two seconds , or four seconds , or six seconds . so i did that for about three different conditions . , i th it was , four se i think it was , something like four seconds and , six seconds , and eight seconds . something like that . and it seems like it hurts compared to if you actually train the models using th that same length of time but it does n't hurt that much . , u usually less than point five percent , although i did see one where it was a point eight percent or so rise in word error rate . but this is , w where , even if i train on the , model , and mean subtracted it with the same length of time as in the test , it the word error rate is around , ten percent or nine percent . so it does n't seem like that big a d a difference . professor c: but it but looking at it the other way , is n't it what you 're saying that it did n't help you to have the longer time for training , if you were going to have a short time for professor c: , why would you do it , if you knew that you were going to have short windows in testing . 
phd a: , it seems like for your , in normal situations you would never get twelve seconds of speech , right ? i ' m not e u phd b: you need twelve seconds in the past to estimate , right ? or l or you 're looking at six sec seconds in future and six in phd a: is this twelve seconds of , regardless of speech or silence ? or twelve seconds of speech ? professor c: the other thing , which maybe relates a little bit to something else we ' ve talked about in terms of windowing and so on is , that , i wonder if you trained with twelve seconds , and then when you were two seconds in you used two seconds , and when you were four seconds in , you used four seconds , and when you were six and you build up to the twelve seconds . so that if you have very long utterances you have the best , but if you have shorter utterances you use what you can . grad e: but s so i g so i the que the question i was trying to get at with those experiments is , " does it matter what models you use ? does it matter how much time y you use to calculate the mean when you were , tra doing the training data ? " professor c: right . but the other thing is that 's , the other way of looking at this , going back to , mean cepstral subtraction versus rasta things , is that you could look at mean cepstral subtraction , especially the way you 're doing it , as being a filter . and so , the other thing is just to design a filter . , you 're doing a high - pass filter or a band - pass filter of some sort and just design a filter . and then , a filter will have a certain behavior and you loo can look at the start up behavior when you start up with nothing . and and , it will , if you have an iir filter , it will , , not behave in the steady - state way that you would like it to behave until you get a long enough period , but , by just constraining yourself to have your filter be only a subtraction of the mean , you 're , tying your hands behind your back because there 's filters have all sorts of be temporal and spectral behaviors . and the only thing , consistent that we know about is that you want to get rid of the very low frequency component . phd b: but do you really want to calculate the mean ? and you neglect all the silence regions or you just use everything that 's twelve seconds , and phd b: , so you really need a lot of speech to estimate the mean of it . grad e: , if i only use six seconds , it still works pretty . i saw in my test before . i was trying twelve seconds cuz that was the best in my test before and that increasing past twelve seconds did n't seem to help . th , i it 's something i need to play with more to decide how to set that up for the smartkom system . like , may maybe if i trained on six seconds it would work better when i only had two seconds or four seconds , and professor c: , if you take this filtering perspective and if you essentially have it build up over time . , if you computed means over two and then over four , and over six , essentially what you 're getting at is a , ramp up of a filter anyway . and so you may just want to think of it as a filter . but , if you do that , then , in practice somebody using the smartkom system , one would think if they 're using it for a while , it means that their first utterance , instead of , getting , a forty percent error rate reduction , they 'll get a , over what , you 'd get without this , , policy , you get thirty percent . and then the second utterance that you give , they get the full , full benefit of it if it 's this ongoing thing . 
professor c: that 's if somebody 's using a system to ask for directions , they 'll say something first . and and to begin with if it does n't get them quite right , ma m maybe they 'll come back and say , " excuse me ? " or some it should have some policy like that anyway . and and , , in any event they might ask a second question . and it 's not like what he 's doing does n't , improve things . it does improve things , just not as much as he would like . and so , there 's a higher probability of it making an error , in the first utterance . phd a: what would be really is if you could have , this probably users would never like this but if you had could have a system where , before they began to use it they had to introduce themselves , verbally . hi , bro023adialogueact238 565837 566657 a phd s : s -1 0 my name is so - and - so , bro023cdialogueact236 565505 565855 c professor s^bk -1 0 yeah . bro023adialogueact239 566657 567277 a phd s : s -1 0 i ' m from blah - blah . and you could use that initial speech to do all these adaptations and professor c: , the other thing i which , i much about as much as i should about the rest of the system could n't you , if you did a first pass i what , capability we have at the moment for doing second passes on , some little small lattice , or a graph , or confusion network , . but if you did first pass with , the with either without the mean sub subtraction or with a very short time one , and then , once you , actually had the whole utterance in , if you did , the , longer time version then , based on everything that you had , and then at that point only used it to distinguish between , top n , possible utterances , you might it might not take very much time . , i know in the large vocabulary stu , systems , people were evaluating on in the past , some people really pushed everything in to make it in one pass but other people did n't and had multiple passes . and , the argument , against multiple passes was u has often been " but we want to this to be r have a interactive response " . and the counterargument to that which , say , bbn had , was " , but our second responses are second , passes and third passes are really , really fast " . so , if your second pass takes a millisecond who cares ? grad e: s so , the idea of the second pass would be waiting till you have more recorded speech ? or ? professor c: so if it turned out to be a problem , that you did n't have enough speech because you need a longer window to do this processing , then , one tactic is , looking at the larger system and not just at the front - end is to take in , the speech with some simpler mechanism or shorter time mechanism , do the best you can , and come up with some al possible alternates of what might have been said . and , either in the form of an n - best list or in the form of a lattice , or confusion network , or whatever . and then the decoding of that is much , much faster or can be much , much faster if it is n't a big bushy network . and you can decode that now with speech that you ' ve actually processed using this longer time , subtraction . professor c: so , it 's common that people do this thing where they do more things that are more complex or require looking over more time , whatever , in some second pass . , if the second pass is really , really fast , another one i ' ve heard of is in connected digit , going back and l and through backtrace and finding regions that are considered to be a d a digit , but , which have very low energy . 
so , there 's lots of things you can do in second passes , sorts of levels . anyway , i ' m throwing too many things out . but . phd b: , so , the last two weeks was , like so i ' ve been working on that wiener filtering . and , found that , s single like , do a s normal wiener filtering , like the standard method of wiener filtering . and that does n't actually give me any improvement over like , b it actually improves over the baseline but it 's not like it does n't meet something like fifty percent . so , i ' ve been playing with the v phd b: so , so that 's the improvement is somewhere around , like , thirty percent over the baseline . professor c: no , no , but in combination with our on - line normalization or with the lda ? phd b: no . it actually improves over the baseline of not having a wiener filter in the whole system . like i have an lda f lda plus on - line normalization , and then i plug in the wiener filter in that , phd b: so it improves over not having the wiener filter . so it improves but it does n't take it like be beyond like thirty percent over the baseline . so professor c: but that 's what i ' m confused about , cuz that our system was more like forty percent without the wiener filtering . phd b: , these are not no , it 's the old vad . so my baseline was , nine this is like w the baseline is ninety - five point six eight , and eighty - nine , and professor c: so , if you can do all these in word errors it 's a lot easier actually . professor c: if you do all these in word error rates it 's a lot easier , right ? phd d: the baseline is something similar to a w , the t the baseline that you are talking about is the mfcc baseline , right ? phd b: so the baseline one baseline is mfcc baseline that when i said thirty percent improvement it 's like mfcc baseline . professor c: so so what 's it start on ? the mfcc baseline is what ? is at what level ? phd b: , so i do n't have that number here . ok , i have it here . , it 's the vad plus the baseline actually . i ' m talking about the mfcc plus i do a frame dropping on it . so that 's like the word error rate is like four point three . like ten point seven . phd b: it 's a medium misma ok , . there 's a ma matched , medium mismatched , and a high matched . so i do n't have the like the phd b: it 's like ten point one . still the same . and the high mismatch is like eighteen point five . phd b: , the one is this one is just the baseline plus the , wiener filter plugged into it . phd b: so , with the on - line normalization , the performance was , ten ok , so it 's like four point three . , that 's the ba the ten point , four and twenty point one . that was with on - line normalization and lda . so the h matched has like literally not changed by adding on - line or lda on it . but the , even the medium mismatch is the same . and the high mismatch was improved by twenty percent absolute . professor c: or italian ? and what did so , what was the , , corresponding number , say , for , , the alcatel system ? phd d: it 's three point four , eight point , seven , and , thirteen point seven . phd b: so . , this is the single stage wiener filter , with the noise estimation was based on first ten frames . actually i started with using the vad to estimate the noise and then i found that it works it does n't work for finnish and spanish because the vad endpoints are not good to estimate the noise because it cuts into the speech sometimes , so i end up overestimating the noise and getting a worse result . 
so it works only for italian by u for using a vad to estimate noise . it works for italian because the vad was trained on italian . so this was , and so this was giving this was like not improving a lot on this baseline of not having the wiener filter on it . so , i ran this with one more stage of wiener filtering on it but the second time , what i did was i estimated the new wiener filter based on the cleaned up speech , and did , smoothing in the frequency to reduce the variance , i have i ' ve observed there are , like , a lot of bumps in the frequency when i do this wiener filtering which is more like a musical noise . and so by adding another stage of wiener filtering , the results on the speechdat - car was like , so , i still do n't have the word error rate . i ' m about it . but the overall improvement was like fifty - six point four six . this was again using ten frames of noise estimate and two stage of wiener filtering . and the rest is like the lda plu and the on - line normalization all remaining the same . so this was , like , compared to , fifty - seven is what you got by using the french telecom system , right ? phd b: so the new wiener filtering schema is like some fifty - six point four six which is like one percent still less than what you got using the french telecom system . professor c: but again , you 're more or less doing what they were doing , right ? phd b: it 's it 's different in a sense like i ' m actually cleaning up the cleaned up spectrum which they 're not doing . they 're d what they 're doing is , they have two stage stages of estimating the wiener filter , but the final filter , what they do is they take it to their time domain by doing an inverse fourier transform . and they filter the original signal using that fil filter , which is like final filter is acting on the input noisy speech rather than on the cleaned up . so this is more like i ' m doing wiener filter twice , but the only thing is that the second time i ' m actually smoothing the filter and then cleaning up the cleaned up spectrum first level . and so that 's what the difference is . and actually i tried it on s the original clean , the original spectrum where , like , i the second time i estimate the filter but actually clean up the noisy speech rather the c s first output of the first stage and that does n't seems to be a giving , that much improvement . i did n't run it for the whole case . and and what i t what i tried was , by using the same thing but , so we actually found that the vad is very , like , crucial . , just by changing the vad itself gives you the a lot of improvement by instead of using the current vad , if you just take up the vad output from the channel zero , when instead of using channel zero and channel one , because that was the p that was the reason why i was not getting a lot of improvement for estimating the noise . so used the channel zero vad to estimate the noise so that it gives me some reliable mar markers for this noise estimation . phd b: so because the channel zero and channel one are like the same speech , but only w , the same endpoints . but the only thing is that the speech is very noisy for channel one , so you can actually use the output of the channel zero for channel one for the vad . , that 's like a cheating method . professor c: so a are they going to pro what are they doing to do , do we know yet ? about as far as what they 're what the rules are going to be and what we can use ? 
phd d: and what they did finally is to , mmm , not to align the utterances but to perform recognition , only on the close - talking microphone , phd d: and to take the result of the recognition to get the boundaries , of speech . professor c: so it 's not like that 's being done in one place or one time . professor c: that 's that 's just a rule and we 'd you were permitted to do that . is is that it ? professor c: , so they will send files so everybody will have the same boundaries to work with ? phd b: but actually their alignment actually is not seems to be improving in like on all cases . phd d: , i so what happened here is that , the overall improvement that they have with this method so , to be more precise , what they have is , they have these alignments and then they drop the beginning silence and the end silence but they keep , two hundred milliseconds before speech and two hundred after speech . and they keep the speech pauses also . and the overall improvement over the mfcc baseline so , when they just , add this frame dropping in addition it 's r , forty percent , right ? fourteen percent , . phd d: which is , t which is the overall improvement . but in some cases it does n't improve . like , y do you remember which case ? phd b: so by using the endpointed speech , actually it 's worse than the baseline in some instances , which could be due to the word pattern . phd d: and , the other thing also is that fourteen percent is less than what you obtain using a real vad . phd d: so this shows that there is still work , working on the vad is still important . phd a: can i ask just a high level question ? can you just say like one or two sentences about wiener filtering and why are people doing that ? what 's what 's the deal with that ? phd b: ok , so the wiener filter , it 's like you try to minimize , so the basic principle of wiener filter is like you try to minimize the , d , difference between the noisy signal and the clean signal if you have two channels . like let 's say you have a clean t signal and you have an additional channel where what is the noisy signal . and then you try to minimize the error between these two . so that 's the basic principle . and you get you can do that , if you have only a c noisy signal , at a level which you , you w try to estimate the noise from the w assuming that the first few frames are noise or if you have a w voice activity detector , you estimate the noise spectrum . and then you phd b: in , after the speech starts . but that 's not the case in , many of our cases but it works reasonably . phd b: and and then you what you do is you , b fff . so again , write down some of these eq and then you do this , this is the transfer function of the wiener filter , so " sf " is a clean speech spectrum , power spectrum and " n " is the noisy power spectrum . and so this is the transfer function . phd b: and then you multiply your noisy power spectrum with this . you get an estimate of the clean power spectrum . but that you have to estimate the sf from the noisy spectrum , what you have . so you estimate the nf from the initial noise portions and then you subtract that from the current noisy spectrum to get an estimate of the sf . so sometimes that becomes zero because you do you do n't have a true estimate of the noise . so the f filter will have like sometimes zeros in it because some frequency values will be zeroed out because of that . and that creates a lot of discontinuities across the spectrum because @ the filter . 
so that 's what that was just the first stage of wiener filtering that i tried . professor c: it 's all pretty related , it 's it 's there 's a di there 's a whole class of techniques where you try in some sense to minimize the noise . and it 's typically a mean square sense , uh , i in some way . and , spectral subtraction is , one approach to it . phd a: do people use the wiener filtering in combination with the spectral subtraction typically , or is i are they competing techniques ? phd b: so it 's like i have n't seen anybody using s wiener filter with spectral subtraction . professor c: , in the long run you 're doing the same thing but y but there you make different approximations , and in spectral subtraction , there 's a an estimation factor . professor c: you sometimes will figure out what the noise is and you 'll multiply that noise spectrum times some constant and subtract that rather than and sometimes people even though this really should be in the power domain , sometimes people s work in the magnitude domain because it works better . and , . phd a: so why did you choose , wiener filtering over some other one of these other techniques ? phd b: , the reason was , like , we had this choice of using spectral subtraction , wiener filtering , and there was one more thing which i ' m trying , is this sub space approach . stephane is working on spectral subtraction . so i picked up phd b: y , we just wanted to have a few noise production compensation techniques and then pick some from that pick one . professor c: i m , there 's car - carmen 's working on another , on the vector taylor series . professor c: so they were just trying to cover a bunch of different things with this task and see , what are the issues for each of them . phd b: so one of the things that i tried , like i said , was to remove those zeros in the fri filter by doing some smoothing of the filter . like , you estimate the edge of square and then you do a f smoothing across the frequency so that those zeros get , like , flattened out . and that does n't seems to be improving by trying it on the first time . so what i did was like i p did this and then you i plugged in the one more the same thing but with the smoothed filter the second time . and that seems to be working . so that 's where i got like fifty - six point five percent improvement on speechdat - car with that . so the other thing what i tried was i used still the ten frames of noise estimate but i used this channel zero vad to drop the frames . so i ' m not still not estimating . and that has taken the performance to like sixty - seven percent in speechdat - car , which is which like shows that by using a proper vad you can just take it to further , better levels . phd b: so far i ' ve seen sixty - seven , no , i have n't seen s like sixty - seven percent . and , using the channel zero vad to estimate the noise also seems to be improving but i do n't have the results for all the cases with that . so i used channel zero vad to estimate noise as a lesser 2 x frame , which is like , everywhere i use the channel zero vad . and that seems to be the best combination , rather than using a few frames to estimate and then drop a channel . professor c: so i ' m still a little confused . is that channel zero information going to be accessible during this test . phd b: nnn , no . this is just to test whether we can really improve by using a better vad . so this is like the noise compensation f is fixed but you make a better decision on the endpoints . 
that 's , like seems to be so we c so , which means , like , by using this technique what we improve just the vad phd b: we can just take the performance by another ten percent or better . so , that was just the , reason for doing that experiment . and , w , but this all these things , i have to still try it on the ti - digits , which is like i ' m just running . and there seems to be not improving a lot on the ti - digits , so i ' m like investigating that , why it 's not . after that . so the other thing is like i ' ve been i ' m doing all this on the power spectrum . tried this on the mel as mel and the magnitude , and mel magnitude , and all those things . but it seems to be the power spectrum seems to be getting the best result . so , one of reasons like doing the averaging , after the filtering using the mel filter bank , that seems to be maybe helping rather than trying it on the mel filter ba filtered outputs . so just th phd b: th that 's the only thing that i could think of why it 's giving improvement on the mel . and , . so that 's it . phd b: subspace , i ' m like that 's still in a little bit in the back burner because i ' ve been p putting a lot effort on this to make it work , on tuning things and other . i was like going parallely but not much of improvement . i ' m just have some skeletons ready , need some more time for it . phd d: , . so , i ' ve been , working still on the spectral subtraction . so to r to remind you a little bit of what i did before , is just to apply some spectral subtraction with an overestimation factor also to get , an estimate of the noise , spectrum , and subtract this estimation of the noise spectrum from the , signal spectrum , but subtracting more when the snr is , low , which is a technique that it 's often used . phd d: so you overestimate the noise spectrum . you multiply the noise spectrum by a factor , which depends on the snr . so , above twenty db , it 's one , so you just subtract the noise . and then it 's b generally , i use , actually , a linear , function of the snr , which is bounded to , like , two or three , when the snr is below zero db . , doing just this , either on the fft bins or on the mel bands , t does n't yield any improvement phd d: o so there is also a threshold , because after subtraction you can have negative energies , and so what do is to put , to add to put the threshold first and then to add a small amount of noise , which right now is speech - shaped . phd d: so it 's a it has the overall energy , pow it has the overall power spectrum of speech . so with a bump around one kilohertz . phd a: so when y when you talk about there being something less than zero after subtracting the noise , is that at a particular frequency bin ? phd a: and so when you say you 're adding something that has the overall shape of speech , is that in a particular frequency bin ? or you 're adding something across all the frequencies when you get these negatives ? phd d: for each frequencies i a i ' m adding some , noise , but the a the amount of noise i add is not the same for all the frequency bins . phd d: . right now i do n't think if it makes sense to add something that 's speech - shaped , because then you have silence portion that have some spectra similar to the sp the overall speech spectra . but so this is something still work on , phd a: i ' m trying to understand what it means when you do the spectral subtraction and you get a negative . 
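a sketch of the oversubtraction rule as just stated: a factor of one above 20 dB SNR, rising linearly to a bound of around two or three at and below 0 dB, followed by the threshold that raises the negative-energy question; the bound of 2.5 and the floor level are assumptions, and plain flat noise stands in for the speech-shaped noise:

```python
import numpy as np

def oversubtraction_factor(snr_db, alpha_max=2.5):
    # 1.0 above 20 dB, alpha_max at and below 0 dB, linear in between.
    alpha = 1.0 + (alpha_max - 1.0) * (20.0 - snr_db) / 20.0
    return np.clip(alpha, 1.0, alpha_max)

def oversubtract(noisy_psd, noise_psd, floor=0.01):
    snr_db = 10.0 * np.log10((noisy_psd + 1e-12) / (noise_psd + 1e-12))
    clean_psd = noisy_psd - oversubtraction_factor(snr_db) * noise_psd
    # threshold the negative energies first, then add a small amount of
    # noise (flat here; the experiment above adds speech-shaped noise
    # with a bump around 1 kHz).
    return np.maximum(clean_psd, 0.0) + floor * noise_psd
```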
it means that at that particular frequency range you subtracted more energy than there was actually phd d: that means that so so , you have an estimation of the noise spectrum , but sometimes , it 's as the noise is not perfectly stationary , sometimes this estimation can be , too small , so you do n't subtract enough . but sometimes it can be too large also . if if the noise , energy in this particular frequency band drops for some reason . phd a: so in an ideal word i world if the noise were always the same , then , when you subtracted it the worst that i you would get would be a zero . , the lowest you would get would be a zero , cuz i if there was no other energy there you 're just subtracting exactly the noise . professor c: , there 's all sorts of , deviations from the ideal here . , you 're talking about the signal and noise , at a particular point . and even if something is stationary in ster terms of statistics , there 's no guarantee that any particular instantiation or piece of it is exactly a particular number or bounded by a particular range . so , you 're figuring out from some chunk of the signal what you think the noise is . then you 're subtracting that from another chunk , and there 's no reason to think that you 'd know that it would n't , be negative in some places . , on the other hand that just means that in some sense you ' ve made a mistake because you certainly have stra subtracted a bigger number than is due to the noise . also , we speak the whole where all this comes from is from an assumption that signal and noise are uncorrelated . and that certainly makes sense in s in a statistical interpretation , that , over , all possible realizations that they 're uncorrelated or assuming , ergodicity that i , across time , it 's uncorrelated . but if you just look at a quarter second , and you cross - multiply the two things , you could very , end up with something that sums to something that 's not zero . so , the two signals could have some relation to one another . and so there 's all sorts of deviations from ideal in this . and and given all that , you could definitely end up with something that 's negative . but if down the road you 're making use of something as if it is a power spectrum , then it can be bad to have something negative . now , the other thing i wonder about actually is , what if you left it negative ? what happens ? professor c: , because , are you taking the log before you add them up to the mel ? professor c: so , i wonder how if you put your thresholds after that , i wonder how often you would end up with , with negative values . phd b: but you will but you end up reducing some neighboring frequency bins @ in the average , right ? when you add the negative to the positive value which is the true estimate . professor c: but nonetheless , , these are it 's another f smoothing , right ? that you 're doing . so , you ' ve done your best shot at figuring out what the noise should be , and now i then you ' ve subtracted it off . and then after that , instead of , leaving it as is and adding things adding up some neighbors , you artificially push it up . which is , it 's there 's no particular reason that 's the right thing to do either , so , i , what you 'd be doing is saying , " , we 're d we 're going to definitely diminish the effect of this frequency in this little frequency bin in the overall mel summation " . it 's just a thought . 
i d i if it would be phd a: the opposite of that would be if you find out you 're going to get a negative number , you do n't do the subtraction for that bin . phd a: that would be almost the opposite , instead of leaving it negative , you do n't do it . if your if your subtraction 's going to result in a negative number , you do n't do subtraction in that . professor c: but that means that in a situation where you thought that the bin was almost entirely noise , you left it . phd d: and , some people also if it 's a negative value they , re - compute it using inter interpolation from the edges and bins . professor c: people can also , reflect it back up and essentially do a full wave rectification instead of a instead of half wave . but it was just a thought that it might be something to try . phd d: , actually i tried , something else based on this , is to put some smoothing , because it seems to help or it seems to help the wiener filtering and , mmm so what i did is , some nonlinear smoothing . actually i have a recursion that computes , let me go back a little bit . actually , when you do spectral subtraction you can , find this equivalent in the s in the spectral domain . you can compute , y you can say that d your spectral subtraction is a filter , and the gain of this filter is the , signal energy minus what you subtract , divided by the signal energy . and this is a gain that varies over time , and , , depending on the s on the noise spectrum and on the speech spectrum . what happen actually is that during low snr values , the gain is close to zero but it varies a lot . mmm , and this is the of musical noise and all these the fact you we go below zero one frame and then you can have an energy that 's above zero . so the smoothing is i did a smoothing actually on this gain , trajectory . but it 's the smoothing is nonlinear in the sense that i tried to not smooth if the gain is high , because in this case we know that , the estimate of the gain is correct because we are not close to zero , and to do more smoothing if the gain is low . so , that 's this idea , and it seems to give pretty good results , although i ' ve just tested on italian and finnish . and on italian it seems my result seems to be a little bit better than the wiener filtering , phd d: , i if you have these improvement the detailed improvements for italian , finnish , and spanish there professor c: so these numbers he was giving before with the four point three , and the ten point one , and , those were italian , right ? phd b: so so , no , i actually did n't give you the number which is the final one , phd b: which is , after two stages of wiener filtering . , that was , like the overall improvement is like fifty - six point five . , his number is still better than what i got in the two stages of wiener filtering . professor c: but do you have numbers in terms of word error rates on italian ? so just so you have some sense of reference ? phd d: and then , d , nine point , one . and finally , sixteen point five . phd d: plus plus nonlinear smoothing . , it 's the system it 's exactly the sys the same system as sunil tried , phd a: what is it the , france telecom system uses for do they use spectral subtraction , or wiener filtering , or ? phd b: , filtering . , it 's not exactly wiener filtering but some variant of wiener filtering . phd b: s they have like th the just noise compensation technique is a variant of wiener filtering , plus they do some smoothing techniques on the final filter . 
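the nonlinear smoothing of the gain trajectory can be sketched as a first-order recursion whose forgetting factor depends on the gain itself: heavy smoothing where the gain is low and jittery, which is where the musical noise lives, and almost none where it is high and trustworthy. the two constants are illustrative, and the recursion is also the source of the roughly fifty-millisecond latency that comes up again below:

```python
import numpy as np

def smooth_gain_in_time(gains, lam_low=0.9, lam_high=0.1):
    """gains: (n_frames, n_bins) subtraction gains in [0, 1]."""
    smoothed = np.empty_like(gains)
    state = gains[0]
    smoothed[0] = state
    for t in range(1, len(gains)):
        g = gains[t]
        # per-bin forgetting factor, interpolated by the gain value:
        # close to lam_low (heavy smoothing) when g is near 0,
        # close to lam_high (light smoothing) when g is near 1.
        lam = lam_low * (1.0 - g) + lam_high * g
        state = lam * state + (1.0 - lam) * g
        smoothed[t] = state
    return smoothed
```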
the th they actually do the filtering in the time domain . so they would take this hf squared back , taking inverse fourier transform . and they convolve the time domain signal with that . phd d: one in the time domain and one in the frequency domain by just taking the first , coefficients of the impulse response . phd d: because you have also two smoothing . one in the time domain , and one in the frequency domain , phd a: does the smoothing in the time domain help , do you get this musical noise with wiener filtering or is that only with , spectral subtraction ? phd a: does the smoothing in the time domain help with that ? or some other smoothing ? professor c: , it 's not clear that these musical noises hurt us in recognition . we if they do . , they sound bad . phd d: , actually the smoothing that i did do here reduced the musical noise . , it phd d: , i can not you can not hear beca , actually what i d did not say is that this is not in the fft bins . this is in the mel frequency bands . it could be seen as a f a smoothing in the frequency domain because i used , in ad mel bands in addition and then the other phase of smoothing in the time domain . but , when you look at the spectrogram , if you do n't have an any smoothing , you clearly see , like in silence portions , and at the beginning and end of speech , you see spots of high energy randomly distributed over the spectrogram . phd d: which is musical noise , if it if you listen to it , if you do this in the fft bins , then you have spots of energy randomly distributing . and if you f if you re - synthesize these spot sounds as , like , sounds , professor c: , none of these systems , have , y you both are working with , our system that does not have the neural net , so one would hope , presumably , that the neural net part of it would improve things further as they did before . phd d: although if we , look at the result from the proposals , one of the reason , the n system with the neural net was , more than , around five percent better , is that it was much better on highly mismatched condition . i ' m thinking , on the ti - digits trained on clean speech and tested on noisy speech . , for this case , the system with the neural net was much better . but not much on the in the other cases . if we have no , spectral subtraction or wiener filtering , i the system is , we thought the neural network is much better than before , even in these cases of high mismatch . so , maybe the neural net will help less but , professor c: it could do a nonlinear spectral subtraction but i if it , you have to figure out what your targets are . phd a: i was thinking if you had a clean version of the signal and a noisy version , and your targets were the m f - , whatever , frequency bins professor c: , that 's not so much spectral subtraction then , but it 's but at any rate , people , professor c: y , we had visitors here who did that when you were here ba way back when . people d done lots of experimentation over the years with training neural nets . and it 's not a bad thing to do . it 's another approach . m , it 's it , the objection everyone always raises , which has some truth to it is that , it 's good for mapping from a particular noise to clean but then you get a different noise . and the experiments we saw that visitors did here showed that it there was at least some , gentleness to the degradation when you switched to different noises . it did seem to help . so that you 're right , that 's another way to go . 
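going back to the time-domain filtering described at the top of this exchange: applying the gain as a short convolution amounts to inverse-transforming it and truncating the impulse response, and the truncation is itself one of the two smoothings mentioned. the tap count here is a guess:

```python
import numpy as np

def apply_gain_in_time(frame, gain, n_taps=17):
    """turn a real, zero-phase gain curve (over rfft bins) into a short
    FIR filter and convolve the time-domain frame with it."""
    h = np.fft.irfft(gain)          # zero-phase impulse response
    k = n_taps // 2
    h = np.roll(h, k)[:n_taps]      # center the main lobe, keep n_taps
    return np.convolve(frame, h, mode="same")
```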
phd a: how did it compare on , for good cases where it , that it was trained on ? did it do pretty ? professor c: , it did very . but to some extent that 's what we 're doing . , we 're not doing exactly that , we 're not trying to generate good examples but by trying to do the best classifier you possibly can , for these little phonetic categories , professor c: it 's it 's built into that . and and that 's why we have found that it does help . so , , we 'll just have to try it . but i would imagine that it will help some . , it we 'll just have to see whether it helps more or less the same , but i would imagine it would help some . so in any event , all of this i was just confirming that all of this was with a simpler system . ok ? phd d: , so this is th the , actually , this was the first try with this spectral subtraction plus smoothing , and i was excited by the result . , then i started to optimize the different parameters . and , the first thing i tried to optimize is the , time constant of the smoothing . and it seems that the one that i chose for the first experiment was the optimal one , so , phd d: so this is the first thing . , another thing that i it 's important to mention is , that this has a this has some additional latency . because when i do the smoothing , it 's a recursion that estimated the means , so of the g of the gain curve . this is a filter that has some latency . and i noticed that it 's better if we take into account this latency . so , instead o of using the current estimated mean to , subtract the current frame , it 's better to use an estimate that 's some somewhere in the future . phd b: you mean , the m the mean is computed o based on some frames in the future also ? or or no ? phd d: it 's the recursion , so it 's the center recursion , and the latency of this recursion is around fifty milliseconds . phd d: the mean estimation has some delay , the filter that estimates the mean has a time constant . professor c: worse . it 's depending on how all this comes out we may or may not be able to add any latency . phd d: , but so , it depends . , y actually , it 's l it 's three percent . b but i do n't think we have to worry too much on that right now while you kno . professor c: s , the only thing is that i would worry about it a little . because if we completely ignore latency , and then we discover that we really have to do something about it , we 're going to be find ourselves in a bind . , maybe you could make it twenty - five . what ? , just be a little conservative professor c: because we may end up with this crunch where all of a sudden we have to cut the latency in half . phd d: s , there are other things in the , algorithm that i did n't , @ a lot yet , which phd a: a quick question just about the latency thing . if if there 's another part of the system that causes a latency of a hundred milliseconds , is this an additive thing ? or c or is yours hidden in that ? phd b: we can do something in parallel also , in some like some cases like , if you wanted to do voice activity detection . and we can do that in parallel with some other filtering you can do . so you can make a decision on that voice activity detection and then you decide whether you want to filter or not . but by then you already have the sufficient samples to do the filtering . so , sometimes you can do it anyway . 
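to put numbers on the latency question, assuming the usual 10 ms frame shift (which is not stated here): the smoothing recursion's 50 ms delay is five frames, and whether that is additive depends on whether those five frames can be hidden inside a buffer that already exists for another reason:

```latex
\frac{50\,\mathrm{ms}}{10\,\mathrm{ms/frame}} = 5\ \text{frames},\qquad
L_{\text{total}} =
\begin{cases}
L_{\text{buffer}} & \text{if the 5 frames fit inside an existing buffer,}\\[2pt]
L_{\text{buffer}} + 50\,\mathrm{ms} & \text{if the stages run serially.}
\end{cases}
```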
phd a: , could n't , i could n't you just also , i if that the l the largest latency in the system is two hundred milliseconds , do n't you could n't you just buffer up that number of frames and then everything uses that buffer ? and that way it 's not additive ? professor c: , everything is sent over in buffers cuz of is n't it the tcp buffer some ? phd b: you mean , the data , the super frame ? , but that has a variable latency because the last frame does n't have any latency and first frame has a twenty framed latency . so you ca n't r rely on that latency all the time . because the transmission over the air interface is like a buffer . twenty frame twenty four frames . but the only thing is that the first frame in that twenty - four frame buffer has a twenty - four frame latency . and the last frame does n't have any latency . because it just goes as phd a: , i was n't thinking of that one in particular but more of , if there is some part of your system that has to buffer twenty frames , ca n't the other parts of the system draw out of that buffer and therefore not add to the latency ? professor c: and and that 's one of the all of that is things that they 're debating in their standards committee . phd d: there is , these parameters that i still have to look at . like , i played a little bit with this overestimation factor , but i still have to look more at this , at the level of noise i add after . , i know that adding noise helped , the system just using spectral subtraction without smoothing , but i right now if it 's still important or not , and if the level i choose before is still the right one . same thing for the shape of the noise . maybe it would be better to add just white noise instead of speech shaped noise . phd d: , and another thing is to for this use as noise estimate the mean , spectrum of the first twenty frames of each utterance . i do n't remember for this experiment what did you use for these two stage phd b: , the reason was like in ti - digits i do n't have a lot . i had twenty frames most of the time . phd d: but , so what 's this result you told me about , the fact that if you use more than ten frames you can improve by t phd b: , that 's using the channel zero . if i use a channel zero vad to estimate the noise . phd d: ok . but in this experiment i did i did n't use any vad . used the twenty first frame to estimate the noise . and so i expected it to be a little bit better , if , i use more frames . ok , that 's it for spectral subtraction . the second thing i was working on is to , try to look at noise estimation , mmm , and using some technique that does n't need voice activity detection . and for this i u simply used some code that , i had from belgium , which is technique that , takes a bunch of frame , and for each frequency bands of this frame , takes a look at the minima of the energy . and then average these minima and take this as an energy estimate of the noise for this particular frequency band . and there is something more to this actually . what is done is that , these minima are computed , based on , high resolution spectra . so , i compute an fft based on the long , signal frame which is sixty - four millisecond phd d: what what i d , i do actually , is to take a bunch of to take a tile on the spectrogram and this tile is five hundred milliseconds long and two hundred hertz wide . and this tile , in this tile appears , like , the harmonics if you have a voiced sound , because it 's the ftt bins . 
and when you take the m the minima of these this tile , when you do n't have speech , these minima will give you some noise level estimate , if you have voiced speech , these minima will still give you some noise estimate because the minima are between the harmonics . and if you have other speech sounds then it 's not the case , but if the time frame is long enough , like s five hundred milliseconds seems to be long enough , you still have portions which , are very close whi which minima are very close to the noise energy . phd d: sixty - four milliseconds is to compute the fft , bins . the the fft . actually it 's better to use sixty - four milliseconds because , if you use thirty milliseconds , then , because of the this short windowing and at low pitch , sounds , the harmonics are not , wha , correctly separated . so if you take these minima , it b they will overestimate the noise a lot . professor c: so you take sixty - four millisecond f ts and then you average them over five hundred ? , what do you do over five hundred ? phd d: so i take to i take a bunch of these sixty - four millisecond frame to cover five hundred milliseconds , and then i look for the minima , phd d: on the bunch of fifty frames , right ? so the interest of this is that , as y with this technique you can estimate u some reasonable noise spectra with only five hundred milliseconds of signal , so if the n the noise varies a lot , you can track better track the noise , which is not the case if you rely on the voice activity detector . so even if there are no speech pauses , you can track the noise level . the only requirement is that you must have , in these five hundred milliseconds segment , you must have voiced sound at least . cuz this these will help you to track the noise level . so what i did is just to simply replace the vad - based , noise estimate by this estimate , first on speechdat - car , only on speechdat - car actually . and it 's , slightly worse , like one percent relative compared to the vad - based estimates . the reason why it 's not better , is that the speechdat - car noises are all stationary . so , u y there really is no need to have something that 's adaptive , they are mainly stationary . but , i expect s maybe some improvement on ti - digits because , nnn , in this case the noises are all sometimes very variable . , so i have to test it . professor c: but are you comparing with something e i ' m p s a little confused again , i it , when you compare it with the v a d - based , vad - is this is this the ? phd d: it 's the france - telecom - based spectra , s , wiener filtering and vad . so it 's their system but just i replace their noise estimate by this one . phd d: in i i ' m not no , no . it 's our system but with just the wiener filtering from their system . right ? actually , th the best system that we still have is , our system but with their noise compensation scheme , phd d: so i ' m trying to improve on this , and by replacing their noise estimate by , something that might be better . professor c: but the spectral subtraction scheme that you reported on also re requires a noise estimate . could n't you try this for that ? phd d: and i was working on one and the other . , for i will . try also , mmm , the spectral subtraction . phd b: so i ' m also using that n new noise estimate technique on this wiener filtering what i ' m trying . so i have , like , some experiments running , i do n't have the results . i do n't estimate the f noise on the ten frames but use his estimate . 
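a rough sketch of the minima-based estimate: high-resolution spectra from 64 ms windows, tiles of about 500 ms by 200 Hz, and the average of the per-bin minima as the noise level for each band. the tile sizes in frames and bins depend on the frame shift and fft length, so the defaults below are assumptions:

```python
import numpy as np

def minima_noise_estimate(psd, frames_per_tile=50, bins_per_band=13):
    """psd: (n_frames, n_bins) high-resolution power spectrogram.
    returns one noise level per band, from the most recent tile; in
    voiced speech the minima fall between the harmonics, so they still
    track the noise floor without any vad."""
    tile = psd[-frames_per_tile:]
    n_bands = psd.shape[1] // bins_per_band
    noise = np.empty(n_bands)
    for b in range(n_bands):
        band = tile[:, b * bins_per_band:(b + 1) * bins_per_band]
        noise[b] = band.min(axis=0).mean()  # average of per-bin minima
    return noise
```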
phd d: i , also implemented a sp spectral whitening idea which is in the , ericsson proposal . , the idea is just to , flatten the log , spectrum , and to flatten it more if the probability of silence is higher . so in this way , you can also reduce somewhat reduce the musical noise and you reduce the variability if you have different noise shapes , because the spectrum becomes more flat in the silence portions . with this , no improvement , but there are a lot of parameters that we can play with actually , this could be seen as a soft version of the frame dropping because , you could just put the threshold and say that " below the threshold , i will flatten comp completely flatten the spectrum " . and above this threshold , keep the same spectrum . so it would be like frame dropping , because during the silence portions which are below the threshold of voice activity probability , w you would have some dummy frame which is a perfectly flat spectrum . and this , whitening is something that 's more soft because , you whiten you just , have a function the whitening is a function of the speech probability , so it 's not a hard decision . so maybe it can be used together with frame dropping and when we are not about if it 's speech or silence , maybe it has something do with this . professor c: it 's interesting . , , in jrasta we were essentially adding in , white noise dependent on our estimate of the noise . on the overall estimate of the noise . , it never occurred to us to use a probability in there . you could imagine one that made use of where the amount that you added in was , a function of the probability of it being s speech or noise . phd d: , w , right now it 's a constant that just depending on the noise spectrum . professor c: cuz that brings in powers of classifiers that we do n't really have in , this other estimate . so it could be interesting . what what point does the , system stop recording ? how much phd d: so . , so there are with this technique there are some did something exactly the same as the ericsson proposal but , the probability of speech is not computed the same way . and , i for , for a lot of things , actually a g a good speech probability is important . like for frame dropping you improve , like you can improve from ten percent as sunil showed , if you use the channel zero speech probabilities . for this it might help , s so , . the next thing i started to do is to , try to develop a better voice activity detector . i d , for this we can maybe try to train the neural network for voice activity detection on all the data that we have , including all the speechdat - car data . and so i ' m starting to obtain alignments on these databases . , and the way i mi i do that is that use the htk system but i train it only on the close - talking microphone . and then i aligned i obtained the viterbi alignment of the training utterances . it seems to be , actually what i observed is that for italian it does n't seem th - there seems to be a problem . phd b: so , it does n't seems to help by their use of channel zero or channel one . phd d: the c the current vad that we have was trained on , t spine , right ? phd d: italian , and ti - digits with noise and and it seems to work on italian but not on the finnish and spanish data . so , maybe one reason is that s finnish and spanish noise are different . 
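the whitening described at the start of this exchange is a small operation; in this sketch the amount of flattening is driven by the speech probability, and the hard-threshold limit of it is exactly frame dropping with flat dummy frames:

```python
import numpy as np

def soft_whiten(log_spec, p_speech):
    """log_spec: (n_frames, n_bins) log spectra; p_speech: (n_frames,)
    speech probabilities in [0, 1]. each frame is interpolated toward a
    flat spectrum (its own mean) in proportion to the probability of
    silence, so sure-speech frames pass through untouched."""
    flat = log_spec.mean(axis=1, keepdims=True)
    w = p_speech[:, None]
    return w * log_spec + (1.0 - w) * flat
```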
actually we observed we listened to some of the utterances and sometimes for finnish there is music in the recordings and strange things , so the idea was to train on all the databases and obtain an alignment to train on these databases , and also to try different features as input to the vad network . we came up with a bunch of features that we want to try , like the spectral slope , the degree of voicing with the features that we started to develop with carmen , the correlation between bands , and different features . professor c: , hans - guenter will be here next week so he 'll be interested in all of these things .

summary: the icsi meeting recorder group of berkeley met for the first time in two weeks. group members reported their progress in the areas of spectral subtraction , wiener filtering and noise estimation. they also discussed topics relating to the rules and preferences of the project they are working on , including single vs multiple passes. a number of the group also took time to explain the basics of their approaches to the group. there are hopes that a visitor coming for three weeks may lead to a longer-term collaboration. the visitor works on spectral subtraction , so speaker me026 will make sure he talks to him. speaker mn007 agreed , at me013's suggestion , to try his noise compensation scheme in combination with the prior work on spectral subtraction. in implementing smoothing for the spectral subtraction , latency has been increased ; while some feel this is nothing to worry about , others feel it is better to worry now , in case it turns out to be something to worry about. speaker me026 has been experimenting with spectral subtraction using different data window sizes. one possible idea is to use increasing windows as more data becomes available. speaker mn049 has been working on wiener filtering , and testing with just the base system provides 30% improvement. using a second stage of filtering led to even more improvement. speaker mn007 is working on spectral subtraction , still with minimal results. smoothing seems to help , and implementing it alongside the neural net should also be positive. he has also been working on noise estimation with an energy-minima approach that does not require the voice activity detector.
grad b: , the remote will do it ok . cuz i ' m already up there ? grad b: so , let 's see . which one of these buttons will do this for me ? aha ! grad c: , " the search for the middle layer " . it 's talks about it just refers to the fact that one of main things we had to do was to decide what the intermediate nodes were , grad a: but if you really want to find out what it 's about you have to click on the little light bulb . grad b: although i ' ve never i what the light bulb is for . i did n't i install that into my powerpoint presentation . grad a: it opens the assistant that tells you that the font type is too small . do you wanna try ? grad c: can you maximize the window so all that on the side is n't does n't appear ? grad b: do that , but then i have to end the presentation in the middle so go back to open up grad b: is that better ? i 'll also get rid of this " click to add notes " . grad b: so then the features we decided or we decided we were talked about , right ? the prosody , the discourse , verb choice . we had a list of things like " to go " and " to visit " and what not . the " landmark - iness " of i knew you 'd like that . grad b: , of a building . whether the and this i we actually have a separate feature but i decided to put it on the same line for space . walls which we can look up because if you 're gon na get real close to a building in the tango mode , right , there 's got ta be a reason for it . and it 's either because you 're in route to something else or you wanna look at the walls . the context , which in this case we ' ve limited to " business person " , " tourist " , or " unknown " , the time of day , and " open to suggestions " , is n't actually a feature . it 's " we are open to suggestions . " grad d: right . can ask the walls part of it is that , in this particular domain you said be i it could be on two different lines but are you saying that in this particular domain it happens the that landmark - iness cor is correlated with grad b: i either could put " walls " on its own line or " open to suggestions " off the slide . grad b: but if it 's architecturally significant you might be able to see it from like you m might be able to " vista " it , and be able to grad b: , versus , like , i was at this place in europe where they had little carvings of , like , dead people on the walls . i do n't remember w it was a long time ago . grad b: but if you looked at it real close , you could see the in intricacy of the walls . grad a: there is a term that 's often used . that 's " saliency " , or the " salience " of an object . and i was just wondering whether that 's the same as what you describe as " landmark - iness " . but it 's really not . an object can be very salient but not a landmark . grad d: not a landmark . there 's landmark for , touristic reasons and landmark for i navigational reasons . grad d: but you can imagine maybe wanting the oth both kinds of things there for different , goals . right ? grad b: but tourist - y landmarks also happen to be would n't could n't they also be they 're not exclusive groups , are they ? like non - tourist - y landmarks and grad b: ok , so our initial idea was not very satisfying , because our initial idea was all the features pointing to the output node . grad b: and , so we reasons being , it 'd be a pain to set up all the probabilities for that . if we moved onto the next step and did learning of some sort , according bhaskara we 'd be handicapped . i belief - nets very . 
grad c: usually , , n if you have n features , then it 's two to the n or exponential in n . grad b: so then our next idea was to add a middle layer , so the thinking behind that was we have the features that we ' ve drawn from the communication of some like , the someone s the person at the screen is trying to communicate some abstract idea , like " i ' m " the abstract idea being " i am a tourist i want to go to this place . " right ? so we 're gon na set up features along the lines of where they want to go and what they ' ve said previously and whatnot . and then we have the means that they should use . but the middle thing , we were thinking along the lines of maybe trying to figure out , like , the concept of whether they 're a tourist or whether they 're running an errand like that along those lines . or yes , we could things we could n't extract the from the data , the hidden variables . yes , good . so then the hidden variables hair variables we came up with were whether someone was on a tour , running an errand , or whether they were in a hurry , because we were thinking , if they were in a hurry there 'd be less likely to like or th grad c: want to do vista , right ? because if you want to view things you would n't be in a hurry . grad b: or they might be more likely to be using the place that they want to go to as a like a navigational point to go to another place . whether the destination was their final destination , whether the destination was closed . those are all and then " let 's look at the belief - net " ok . so that means that i should switch to the other program . right now it 's still in a toy version of it , because we did n't know the probabilities of or i 'll talk about it when i get the picture up . grad b: so this right what we let 's see . what happens if i maximize this ? there we go . so . the mode has three different outputs . the probability whether the probability of a vista , tango , or enter . the " context " , we simplified . it 's just the businessman , the tourist , unknown . verb used is actually personally amusing mainly because it 's just whether the verb is a tango verb , an enter verb , or a vista verb . grad c: not . that 's that needs a lot of work . but that would ' ve made the probably significantly be more complicated to enter , grad c: so we decided that for the purposes of this it 'd be simpler to just have three verbs . grad b: why do n't you mention things about this , bhaskara , that i am not that are not coming to my mind right now . grad c: ok , so note the four nodes down there , the things that are not directly extracted . actually , the five things . the " closed " is also not directly extracted i , from the grad c: right , so f but the other ones , the final destination , the whether they 're doing business , whether they 're in a hurry , and whether they 're tourists , that thing is all probabilistically depends on the other things . grad c: so we have n't , managed like we do n't have nodes for " discourse " and " parse " , although like in some sense they are parts of this belief - net . but the idea is that we just extract those features from them , so we do n't actually have a node for the entire parse , because we 'd never do inference on it anyway , so . grad d: so some of the top row of things what 's what 's " disc admission fee " ? grad c: whether they discuss the admission fees . so we looked at the data and in a lot of data people were saying things like " can i get to this place ? " what is the admission fee ? . 
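the size argument is worth making concrete. a conditional probability table needs one distribution per parent configuration, so wiring, say, eight ternary features straight into the three-valued mode node (the counts here are illustrative) costs far more free parameters than routing them through a handful of binary hidden nodes:

```latex
\text{direct: } 3^{8}\,(3-1) = 13122
\qquad\text{vs.}\qquad
\text{layered: } \underbrace{2^{5}(3-1)}_{\text{mode CPT}}
 + \underbrace{5\cdot 3^{3}(2-1)}_{\text{hidden CPTs}} = 64 + 135 = 199.
```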
so that 's like a huge clue that they 're trying to enter the place rather than to tango or vista , grad b: there were there 'd be other things besides just the admission fee , but , we did n't have grad d: so there are certain cues that are very strong either lexical or topic - based , concept cues grad d: for one of those . and then in that second row or whatever that row of time of day through that so all of those some of them come from the utterance and some of them are either world knowledge or situational things . so that you have no distinction between those and grad a: . i would actually suggest we go through this one more time so we all , agree on what the meaning of these things is at the moment and maybe what changes we grad b: , th so one thing i ' m unsure about , is how we have the discus the " admission fee " thing set up . so one thing that we were thinking was by doing the layers like this , we kept things from directly affecting the mode beyond the concept , but you could see perhaps discus the " admission fee " going directly to the mode pointing at " enter " , versus pointing to just at " tourist " , ok ? but we just decided to keep all the things we extracted to point at the middle and then down . grad a: why is the landmark the landmark is facing to the tourists . that 's because we 're talking about landmarks as touristic landmarks not as possible grad c: so let 's see . the variables . disc - " admission fee " is a binary thing , time of day is like morning , afternoon , night . is that the deal ? grad b: that 's how we have it currently set up , but it could be , based upon hour grad c: normally context will include a huge amount of information , but , we are just using the particular part of the context which consists of the switch that they flick to indicate whether they 're a tourist or not , i . grad c: so so it 's not really all of context . similarly prosody is not all of prosody but simply for our purposes whether or not they appear tense or relaxed . grad a: that 's very , ? the the so the context is a switch between tourist or non - tourist ? grad d: so final dest so it seems like that would really help you for doing business versus tourist , grad d: but so the context being , e i if that question 's in general , are you the ar are do they allow business people to be doing non - business things at the moment ? grad c: , right . so then landmark is verb used is like , right now we only have three values , but in general they would be a probability distribution over all verbs . rather , let me rephrase that . it it can take values in the set of all verbs , that they could possibly use . " walls " is binary , closed is binary final destination , again , all those are binary i . and " mode " is one of three things . grad c: walls is something that we extract from our world knowledge . , a . it is binary . grad a: so we can either be in a hurry or not , but we can not be in a medium hurry at the moment ? grad c: , we to do that we would add another value for that . and that would require s updating the probability distribution for " mode " as . because it would now have to like take that possibility into account . grad d: take a conti so , this will happen when we think more about the kinds of verbs that are used in each cases grad d: but you can imagine that it 's verb plus various other things that are also not in the bottom layer that would help you like it 's a conjunction of , i , the verb used and some other that would determine grad d: usually . 
i if that 's always the case i have n't looked at the data as much as you guys have . so . grad a: that 's always warping on something some entity , and maybe at this stage we will we do want to get modifiers in there grad c: we can do a demo in the sense that we can , just ob observe the fact that this will , do inference . grad c: so we can , set some of the nodes and then try to find the probability of other nodes . grad c: just se set a few of them . you do n't have to do the whole thing that we did last time . just like , maybe the fact that they use a certain verb actually forget the verb . just i , say they discussed the admission fee and the place has walls grad c: no that that does n't it 's not really consistent . they do n't discuss the admission fee . make that false . grad b: one thing that bugs me about javabayes is you have to click that and do this . grad c: so that is the probability that they 're entering , vista - ing or tango - ing . and grad b: if it 's night time , they have not discussed admission fee , and the n walls are . so , . i that makes sense . the reason i say the demo does n't work very is yesterday we observed everything in favor of taking a tour , and it came up as " tango " , over and over again . we could n't we could n't figure out how to turn it off of " tango " . grad c: like , we hand - tuned the probabilities , we were like " , if the person does this and this , let 's say forty percent for this , fifty per " like , . so that 's gon na happen . grad a: however , it the purpose was not really , at this stage , to come up with meaningful probabilities but to get thinking about that hidden middle layer . and so th and grad b: we would actually i once we look at the data more we 'll get more hidden nodes , but i 'd like to see more . not because it would expedite the probabilities , cuz it would n't . it would actually slow that down tremendously . grad b: no , we should have exponentially more middle nodes than features we ' ve extracted . i ' m ju i ' m just jo grad d: so . are " doing business " versus " tourist " they refer to your current task . like like current thing you want to do at this moment . grad c: . , that 's that 's an interesting point . whether you 're it 's whether it 's not grad c: it 's more like " are you are tourist ? are you in ham - like heidelberg for a " grad c: that 's a different thing . what if the context , which is not set , but still they say things like , " i want to go , see the castle and , et cetera . " grad a: so if you run out of cash as a tourist , and you need to go to the at grad d: , i see , you may have a task . wh you have to go get money and so you are doing business at that stage . grad c: and that 'll affect whether you want to enter or you if you kinda thing . grad d: so the " tourists " node should be , very consistent with the context node . if you say that 's more their in general what their background is . grad c: this context node is a bit of a like in d do we wanna have like it 's grad d: are you assuming that or not ? like is that to be if that 's accurate then that would determine tourist node . grad c: if the context were to set one way or another , that like strongly , says something about whether or not they 're tourists . so what 's interesting is when it 's not when it 's set to " unknown " . grad c: right now we have n't observed it , so i it 's averaging over all those three possibilities . but yes , you can set it to un " unknown " . 
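the same toy demo can be reproduced outside javabayes; here is a sketch using the pgmpy library (the class is called BayesianModel in older versions), with hand-tuned stand-in numbers just like the ones in the demo. the second query also shows the usual alternative to an explicit "unknown" value: leave the variable unobserved and let inference marginalize over it.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# toy version of the net: two observed cues feed the hidden "tourist"
# node, time of day feeds "hurry", and both hidden nodes feed "mode".
net = BayesianNetwork([
    ("admission", "tourist"), ("walls", "tourist"),
    ("time", "hurry"),
    ("tourist", "mode"), ("hurry", "mode"),
])

net.add_cpds(
    TabularCPD("admission", 2, [[0.5], [0.5]]),   # 0 = not discussed, 1 = discussed
    TabularCPD("walls", 2, [[0.5], [0.5]]),       # 0 = no walls, 1 = walls
    TabularCPD("time", 2, [[0.5], [0.5]]),        # 0 = day, 1 = night
    TabularCPD("tourist", 2,                      # hand-tuned toy numbers
               [[0.9, 0.6, 0.4, 0.1],             # P(tourist = no | a, w)
                [0.1, 0.4, 0.6, 0.9]],            # P(tourist = yes | a, w)
               evidence=["admission", "walls"], evidence_card=[2, 2]),
    TabularCPD("hurry", 2,
               [[0.7, 0.4],
                [0.3, 0.6]],
               evidence=["time"], evidence_card=[2]),
    TabularCPD("mode", 3,                         # 0 = enter, 1 = vista, 2 = tango
               [[0.3, 0.5, 0.6, 0.8],
                [0.3, 0.3, 0.3, 0.1],
                [0.4, 0.2, 0.1, 0.1]],
               evidence=["tourist", "hurry"], evidence_card=[2, 2]),
)

infer = VariableElimination(net)
# the demo from the meeting: no admission-fee talk, walls, night time.
print(infer.query(["mode"], evidence={"admission": 0, "walls": 1, "time": 1}))
# leaving "admission" and "time" unobserved marginalizes over them --
# the standard alternative to modeling an explicit "unknown" state.
print(infer.query(["mode"], evidence={"walls": 1}))
```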
grad a: and if we now do leave everything else as is the results should be the same , grad c: no , because we th - the way we set the probabilities might not have it 's an issue , like grad c: it is . so the issue is that in belief - nets , it 's not common to do what we did of like having , a d bunch of values and then " unknown " as an actual value . what 's common is you just like do n't observe the variable , right , and then just marginalizes but we did n't do this because we felt that there 'd i we were thinking in terms of a switch that actually grad c: but i y what the right thing is to do for that . i ' m not i if i am happy with the way it is . grad a: why do n't we can we , how long would it take to add another node on the observatory and , play around with it ? grad a: let 's just say make it really simple . if we create something that would be so th some things can be landmarks in your sense but they can never be entered ? so s a statue . grad a: so maybe we wanna have " landmark " meaning now " enterable landmark " versus , something that 's simply just a vista point , . grad c: so it 's addressing a variable that 's " enterable or not " . so like an " enterable , question mark " . grad b: also , did n't we have a size as one ? the size of the landmark . grad c: . not when we were doing this , but i at some point we did . grad b: for some reason i had that ok , that was a thought that i had at one point but then went away . grad c: so you want to have a node for like whether or not it can be entered ? grad a: , if we include that , accessibility , is it can it be entered ? then , this is binary as . and then , there 's also the question whether it may be entered . in the sense that , if it 's tom the house of tom cruise , it 's enterable but you may not enter it . you 're not allowed to . unless you are , whatever , his divorce lawyer . and and these are very observable from the ontology things . grad b: way does it actually help to distinguish between those two cases though ? whether it 's practically speaking enterable , or actually physically enterable or not ? grad a: y if you 're running an errand you maybe more likely to be able to enter places that are usually not al w you 're not usually not allowed to m grad d: it seems like it would for , determining whether they wanna go into it or not . grad a: let 's get this b clearer . s so it 's matrix between if it 's not enterable , period . grad b: whether it 's a public building , and whether it 's actually has a door . grad b: so tom cruise 's house is not a public building but it has a door . grad b: ok , sh explain to me why it 's necessary to distinguish between whether something has a door and is not public . or , if something it seems like it 's equivalent to say that it does n't have a door a and it or " not public " and " not a door " are equivalent things , it seems like in practice . grad a: right . so we would have what does it mean , then , that we have to we have an object type statue . that really is an object type . so there is there 's gon na be a bunch of statues . and then we have , an object type , that 's a hotel . how about hotels ? so , the most famous building in heidelberg is actually a hotel . it 's the hotel zum ritter , which is the only renaissance building in heidelberg that was left after the big destruction and for the thirty years war , blah - blah . grad a: it has wonderful walls . - and lots of detail , c and carvings , engravings and , so . 
but , it 's still an unlikely candidate for the tango mode i must say . but . so s so if you are a d it 's very tricky . so i your question is so far i have no really arg no real argument why to differentiate between statues as statues and houses of celebrities , from that point of view . let 's do a can we add , just so see how it 's done , a " has door " property or ? grad a: , , it might affect actually it 's it would n't affect any of our nodes , grad b: you could affect theoretically you could affect " doing business " with " has door " . grad c: i if javabayes is about that . it might be that if you add a new thing pointing to a variable , you just like it just overwrites everything . but you can check . grad b: , we have it saved . so . we can rel open it up again . grad c: that 's fine , but we have to see the function now . has it become all point fives or not ? grad b: so this is " has door " , true , false . that 's acceptable . and i want to edit the function going to that , no . grad c: what would be if it is if it just like kept the old function for either value but . nope . did n't do it . grad a: ok , so just dis dismiss everything . close it and load up the old state so it does n't screw that up . maybe you can read in ? grad d: yes . really i ha i ' ve i have n't used it a lot and i have n't used it in the last many months so , we can ask someone . grad c: it might be worth asking around . like , we looked at a page that had like a bunch of grad c: in a way this is a lot of good features in java it 's cra has a gui and it 's i those are the main two things . it does learning , it has grad b: no it does n't , actually . i did n't think it did learning . maybe it did a little bit of learning , i do n't remember . grad c: but , maybe another thing that but its interface is not the greatest . so . grad a: command line . what is the c code ? can w can we see that ? how do you write the code grad c: there is actually a text file that you can edit . but it 's you do n't have to do that . grad c: it 'll ask you what it wants what you want to open it with and see what bat , i . grad b: the carriage returns on some of them . how they get " auto - fills " i , grad c: that 's how actual probability tables are specified . as , like , lists of numbers . so theoretically you could edit that . grad c: we can maybe write an interface th for entering probability distributions easily , something like a little script . that might be worth it . grad d: i actually seem to recall srini complaining about something to do with entering probability so this is probably grad c: i if it actually manipulate the source , though . that might be a bit complicated . it might be simpler to just have a script that , it 's , like , friendly , grad a: but if th if there is an xml file that or format that it can also read it just reads this , when it starts . grad b: i know there is an i was looking on the we web page and he 's updated it for an xml version of i bayes - nets . there 's a bayes - net spec for in xml . grad c: he 's like this guy has ? the javabayes guy ? so but , e he does n't use it . so in what sense has he updated it ? grad b: because at least the i could have misread the web page , i have a habit of doing that , but . grad b: do i have more slides ? yes , one more . future work . every presentation have a should have a " future work " slide . but it 's we already talked about all this , so . grad c: . the additional thing is i learning the probabilities , also . 
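the "little script" for entering distributions is easy to sketch; the flat list-of-numbers layout that javabayes-style text files expect is an assumption here, so the ordering would need to be checked against a table the gui itself wrote out:

```python
from itertools import product

def elicit_cpt(node, values, parents):
    """parents: list of (name, value_list) pairs. walks every parent
    configuration, prompts for a distribution over `values`, checks it
    sums to one, and returns the numbers as one flat list."""
    table = []
    for config in product(*[vals for _, vals in parents]):
        ctx = ", ".join(f"{name}={v}" for (name, _), v in zip(parents, config))
        print(f"P({node} | {ctx}):")
        row = [float(input(f"  {v}: ")) for v in values]
        if abs(sum(row) - 1.0) > 1e-6:
            raise ValueError("distribution must sum to 1")
        table.extend(row)
    return table
```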
e that 's maybe , i if grad b: and if you have a presentation that does n't have something that does n't work , then you have " what i learned " , as a slide . grad b: you could . my first approach failed . what i learned . ok , so that our presentation 's finished . grad b: i like about these meetings is one person will nod , and then the next person will nod , and then it just goes all the way around the room . grad b: no i earlier i went { nonvocalsound } and bhaskara went { nonvocalsound } and you did it . you did it . grad d: so a more general thing than " discussed admission fee " , could be i ' m just wondering whether the context , the background context of the discourse might be if there 's a way to define it or maybe generalize it some way , there might be other cues that , say , in the last few utterances there has been something that has strongly associated with say one of the particular modes , i if that might be grad d: , and into that node would be various things that could have specifically come up . grad a: a general strategy here , this is excellent because it gets you thinking along these terms is that maybe we ob we could observe a couple of discourse phenomena such as the admission fee , and something else , that happened in the discourse before . and let 's make those four . and maybe there are two so maybe this could be a separate region of the net , which has two has it 's own middle layer . maybe this , has some , funky thing that di if this and this may influence these hidden nodes of the discourse which is maybe something that is , a more general version of the actual phenomenon that you can observe . so things that point towards grad b: so instead of single node , for like , if they said the word " admission fee , or maybe , " how much to enter " other cues . grad b: that would all f funnel into one node that would constitute entrance requirements like that . grad d: it get into plan recognition kinds of things in the discourse . that 's like the bigger , version of it . grad a: exactly . and then maybe there are some discourse acts if they happened before , it 's more for a cue that the person actually wants to get somewhere else and that you are in a route , proceeding past these things , so this would be just something that where you want to pass it . is that it ? however these are then the nodes , the observed nodes , for your middle layer . so this again points to " final destination " , " doing business " , " tourist hurry " and . and so then we can say , " ok . we have a whole region " in a e grad a: , exactly . and this is just then just one . so e because at the end the more we add , the more spider - web - ish it 's going to become in the middle and the more of hand editing . it 's going to get very ugly . but with this way we could say " ok , these are the discourse phenomena . they ra may have there own hidden layer that points to some of the real hidden layer , or the general hidden layer . and the same we will be able to do for syntactic information , the verbs used , the object types used , modifiers . and maybe there 's a hidden layer for that . and and . then we have context . grad c: so essentially a lot of those nodes can be expanded into little bayes - nets of their own . grad b: one thing that 's been bugging me when i more i look at this is that the i , the fact that the there 's a complete separation between the observed features and in the output . , it makes it cleaner , but then . 
if the discourse does grad b: , the " discourse admission fee " node seems like it should point directly to the or increase the probability of " enter directly " versus " going there via tourist " . grad c: or we could like add more , middle nodes . like we could add a node like do they want to enter it , which is affected by admission fee and by whether it 's closed and by whether it has a door . so it 's like there are those are the two options . either like make an arrow directly or put a new node . grad a: and if it if you do it if you could connect it too hard you may get such phenomenon that like " so how much has it cost to enter ? " and the answer is two hundred fifty dollars , and then the persons says " i want to see it . " meaning " it 's way out of my budget " grad a: , nothing comes to mind . without thinking too hard . , maybe , opera premiers . grad d: or maybe , a famous restaurant . or , i . there are various things that you might w not want to eat a meal there but your own table . grad a: that the h nothing beats the admission charge prices in japan . so there , two hundred dollars is moderate for getting into a discotheque . then again , everything else is free then once you 're ins in there . grad a: food and drink and . so . but i , i we can something somebody can have discussed the admission fee and u the answer is s if we , still , based on that result is never going to enter that building . because it 's just too expensive . grad b: so the discourse refers to " admission fee " but it just turns out that they change their mind in the middle of the discourse . grad d: you have to have some notion of not just there 's a there 's change across several turns of discourse so i how if any of this was discussed but how i if it all this is going to interact with whatever general , other discourse processing that might be happen . grad a: it works like this . the , . the first thing we get is that already the intention is t they tried to figure out the intention , simply by parsing it . and this m wo n't differentiate between all modes , but at least it 'll tell us " ok here we have something that somebody that wants to go someplace , now it 's up for us to figure out what going there is happening , and , if the discourse takes a couple of turns before everything all the information is needed , what happens is the parser parses it and then it 's handed on to the discourse history which is , o one of the most elaborate modules . it 's it 's actually the whole memory of the entire system , that knows what wh who said what , which was what was presented . it helps an anaphora resolution and it fills in all the structures that are omitted , so , because you say " ok , how can i get to the castle ? , how much is it ? and " i would like to g let 's do it " and . so even without an a ana anaphora somebody has to make that information we had earlier on is still here . grad a: because not every module keeps a memory of everything that happened . so whenever the , person is not actually rejecting what happened before , so as in " no i really do n't want to see that movie . i 'd rather stay home and watch tv " what movie was selected in what cinema in what town is going to be added into the disc into the representations every di at each dialogue step , by the discourse model , that 's what it 's called . and , it does some help in the anaphora resolution and it also helps in coordinating the gesture screen issues . 
grad d: so one thing that might be helpful , which is implicit in the use of " admission fee discussion " as a cue for entry , is thinking about the plans that various people might have . like all the different general schemas that they might be following this person is finding out information about this thing in order to go in as a tourist , or finding out how to get to this place in order to do business . because then anything that 's a cue for one of the steps would be slight evidence for that overall plan . in more traditional ai kinds of plan recognition things you have some idea , at each turn of an agent doing something , of " ok , what plans is this consistent with ? " and then you get some more information and then you see " here 's a sequence that this roughly fits into " . it might be useful here too . i do n't know how you 'd have to figure out what knowledge representation would work for that . grad a: it 's in these plan schemas . some of them are extremely elaborate what do you need to buy a ticket ? and it 's fifty steps , just for buying a ticket at a ticket counter and maybe it 's helpful to look at those . it 's amazing what human beings can do . when we talked we had the example of you being a person at a ticket counter working at a railway station , and somebody runs up to you with a suitcase in his hands , says " new york " and you say " track seven " , and that works because that person actually is following you execute a whole plan of going through a hundred and fifty steps , without any information other than " new york " , inferring everything from the context . so , it works . , even though there is probably no train from here to new york , grad a: but it 's possible . , no , you probably have to transfer somewhere else . is that san francisco chicago ? is that possible ? grad b: one time i saw a report on trains , and i do n't know if there was a line that went from somewhere , maybe it was sacramento , to chicago , but there was like a california - to - chicago line of some sort . i could be wrong though . it was a while ago . grad a: it never went all the way , you always had to change trains at omaha , grad c: and there 's many more of them . they 're , it 's way better grad a: i used amtrak quite a bit on the east coast and i was surprised . it was actually ok . , boston new york , new york rhode island , whatever , grad d: just kidding . so that structure that robert drew on the board was more cue - type - based , right here we 're gon na segment off a bit that comes from discourse and then some of the things we 're talking about here are more we mentioned maybe if they talk about entering or so they might be more task - based . so i do n't know , there 's more than one way of organizing the variables into something
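a toy rendering of the consistency - based plan recognition grad d sketches above : keep every plan schema whose step sequence is still consistent with the actions observed so far . the schemas and step names below are invented for illustration ; real plan schemas , as noted in the discussion , can run to dozens of steps .

```python
# toy consistency filter: a plan survives if the observed actions appear,
# in order, somewhere in its step sequence. all schemas here are made up.
PLAN_SCHEMAS = {
    "enter_as_tourist": ["ask_directions", "ask_admission_fee", "ask_opening_hours"],
    "do_business":      ["ask_directions", "ask_office_location"],
    "photograph":       ["ask_directions", "ask_best_view"],
}

def consistent(schema_steps, observed):
    """True if `observed` is an in-order subsequence of `schema_steps`."""
    it = iter(schema_steps)
    return all(any(step == obs for step in it) for obs in observed)

def candidates(observed):
    return [name for name, steps in PLAN_SCHEMAS.items()
            if consistent(steps, observed)]

print(candidates(["ask_directions"]))                       # all three plans
print(candidates(["ask_directions", "ask_admission_fee"]))  # ['enter_as_tourist']
```

each surviving schema would then lend a little evidence to its associated mode , in the spirit of the cue nodes discussed earlier .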
grad a: what you guys did is really nicely sketching out different tasks , and maybe some of their conditions . for one task you 're more likely in a hurry when you 're doing business , and less in a hurry when you 're a tourist and tourists may never have final destinations , because they are eternally traveling around so maybe what might happen is that we do get this task - based middle layer , and then we 'll get these sub - middle layers that are more cue - based . grad a: nah ? might be a dichotomy of the world . so i suggest we proceed with this in the sense that maybe throughout this week the three of us will talk some more about maybe segmenting off different regions , and we make up some toy observable " nodes " is that what grad a: make up some features for those identify four regions , maybe make up some features for each region , and a middle layer for those . and then these should connect somehow to the more plan - based deep space grad a: . they will be ad - hoc for some time to come . grad c: , this is like the probabilities and all are completely ad - hoc . we need to look at them . but even close to the end we were really ad - hoc . grad c: right ? cuz if it 's like four things coming in , and , say , some of them have like three possibilities and all that , you 're thinking like a hundred and forty four possible numbers to enter , grad b: some of them are completely absurd too , like they want to enter , but it 's closed , grad b: it 's night time , there are tourists and all this weird stuff happens at the line up and you 're like grad c: , the only possible interpretation is that they have come here just to rob the museum something to that effect . grad d: in which case you 're supposed to alert the authorities , and take appropriate action . grad c: , another thing to do is also to ask around about other bayes - net packages . is srini gon na be at the meeting tomorrow , do you know ? grad b: jerry never sent out an email , did he , ever ? grad c: but he mentioned at the last meeting that someone was going to be talking , i forget who . grad a: that will be one thing we could do . also , we can start looking at the smartkom tables i actually wanted to show that to you guys now but . grad a: , no , i actually made a mistake because it fell asleep and when linux falls asleep on my machine it does n't wake up ever , so i had to reboot grad a: and if i reboot without a network , i will not be able to start smartkom , because i need to have a network . so we 'll do that maybe grad c: but once you start smartkom you do n't have to be on a network anymore . , interesting . grad a: it looks up something that is written by the operating system only if you get a dhcp request , so my computer does not know its ip address unless it boots up with networking . grad a: and if i do n't have an ip address , they ca n't look up who localhost is , and so on . always fun . but there 's a simple solution . we can just go downstairs and look at this , but maybe not today . the other thing ok , i have to report on data collection .
we interviewed fey , and she 's willing to do it meaning be the wizard for the data collection , also maybe transcribe a little bit if she has to , but also recruit subjects , organize them , and so on . so that looks good . jerry however suggested that we should have a trial run with her , to see whether she can actually manage all the spontaneous , eloquent creativeness that we expect of the wizard . and i talked to liz about this and it looks as if friday afternoon will be the time when we have a first trial run for the data . grad c: so who is one of you gon na be the subject ? like are you grad a: liz also volunteered to be the first subject , which might be even better than us guys . grad a: if we do need her for the technical side , then one of you has to jump in . grad b: i like how you guys have successfully narrowed it down . is one of you going to be the subject ? is one of you going to jump in . grad c: i figured it has to be someone who 's familiar enough with the data to pose problems for the wizard , so we can see if they 're good . grad d: , in this case it 's a testing of the wizard rather than of the subject . grad a: yes , we would like to test the wizard , but if we take a subject that is completely unfamiliar with the task , or any of the setup , we get a more realistic grad d: that might be a little unfair . i ' m not sure if we do you think there 's a chance we might need liz for , whatever , the technical side of things ? i ' m sure we can get other people around who do n't know anything , if we want another subject . like drag ben into it . although he might have problems so , is the experimental setup for the data collection already determined ? grad b: i like that . test the wizard . i want that on a t - shirt . grad a: on the technical side the experimental setup is ready , yes , except we still need a recording device for the wizard , just a tape recorder that 's running in a room . but in terms of specifying the scenario , we ' ve gotten a little further but we wanted to wait until we know who the wizard is , and have the wizard partake in the ultimate definition . so if on friday it turns out that she really likes it and we really like her , then nothing should stop us from sitting down next week and getting all the details completely figured out . grad d: so the ideal task will have whatever i do n't know how much the structure of the evolving bayes - net will affect it like we wanna be able to collect as many of the variables that are needed for that , grad a: i ' m even this tango , enter , vista is itself an ad - hoc scenario . the basic idea behind the data collection was the following . the data we get from munich is very command - line , linguistically simple . hardly anything complicated . no metaphors whatsoever . not a rich language . so we wanted just to collect data that elicits more that elicits richer language . and we actually did not want to constrain it too much , just see what people say . and then maybe we 'll discover the phenomena that we want to solve , with whatever engine we come up with . so this is a parallel track , and they hopefully meet , grad a: it should tell us what phenomena could occur , and it should tell us also maybe something about the difference between people who think they speak to a computer versus people who think they speak to a human being , and the differences there .
so it may get us some more information on the human - machine pragmatics that no one knows anything about , as of yesterday . and nothing has changed since then . and secondly , now that we have tasted blood with this , and especially since johno ca n't stop tango - ing , we may actually include those intentions . so now we should maybe have at least one navigational task where it 's not explicit it 's implicit that the person wants to enter , and maybe some task where it 's more or less explicit that the person wants to take a picture , or see it . so that we can label it . that 's how we get a corpus that we can label . whereas if we just collected data we 'd never know what they actually wanted , and we 'd get no cues .
the group discussed the first version of the bayes-net used to work out a user's intentions when asking for directions from a navigation device. three intentions were identified: vista ( to view ) , enter ( to visit ) and tango ( to approach ). the structure of the belief-net comprises , firstly , a feature layer , which includes linguistic , discourse and world knowledge information that can be gleaned from the data. it is possible for these variables to form thematic clusters ( eg "entrance" , "type of object" , "verb" ) , each one with a separate middle layer. these feed , in turn , into the main middle layer , which defines more general hidden variables , such as the tourist/business status of the user. the feature layer can end up being cue-based , while the middle layers are task-based. the latter determine the final probability of each intention in the output layer. this first model of the belief-net was built in javabayes , since it is a free package , has a graphical interface , and can take xml files as input. at this stage , all the actual probabilities are ad-hoc and hand-coded. however , there has been progress in the design and organisation of experiments that will eventually provide data more useful and appropriate for this task. it is necessary for the belief-net to have at least one layer of nodes between the features and the final output. this makes the structure more flexible in terms of coding feature-layer probabilities. another technique to systematise the work is the thematic clustering of the features , each cluster forming a bayes-net of its own: for example , features like "admission fee" and "opening hours" can feed into an intermediate "entrance" node connecting to the main middle layer. the next stage is to refine the set of feature nodes and identify possible clusters. although , in theory , traditional ai plan recognition techniques could also be helpful for inferring intentions , the schemas involved are too elaborate for this task. further work also includes discussing the possible advantages of bayes-net packages other than javabayes with experts at icsi. if they continue using javabayes , a script to help with the inputting of probabilities in the nodes is needed , as the in-built method is cumbersome. finally , it was decided that at least some of the experiments designed for the new data collection initiative will factor in the intentions studied in the current task. the set of cues that form the feature nodes is not well-defined yet. especially with lexical cues ( verbs , modifiers etc ) , no one offered specific intuitions as to how they might contribute to the inference of intentions.
there was a demonstration of the structure and the function of a toy version of the belief-net for the intentionality task. the features nodes include things like prosody , discourse , verb choice , "landmark-iness" of a building , time of day and whether the admission fee was discussed. the values these nodes take feed into the middle layer nodes identified as hidden variables of the user/device interaction , such as whether the user is on tour , running an errand or in a hurry. these , in turn , help infer whether the user wants to see , enter or simply approach a building. the set of features nodes is derived from linguistic cues , world knowledge and discourse history. smartkom , although it does not code for intentions as specified in this task , provides a model of the discourse , which can be useful for the detection of features through querying and anaphora resolution. experiments for the collection of new data will start soon , since someone who will recruit subjects and help run the experiments has already been hired and the designing of the experiments has also progressed significantly.
###dialogue: grad b: , the remote will do it ok . cuz i ' m already up there ? grad b: so , let 's see . which one of these buttons will do this for me ? aha ! grad c: , " the search for the middle layer " . it 's talks about it just refers to the fact that one of main things we had to do was to decide what the intermediate nodes were , grad a: but if you really want to find out what it 's about you have to click on the little light bulb . grad b: although i ' ve never i what the light bulb is for . i did n't i install that into my powerpoint presentation . grad a: it opens the assistant that tells you that the font type is too small . do you wanna try ? grad c: can you maximize the window so all that on the side is n't does n't appear ? grad b: do that , but then i have to end the presentation in the middle so go back to open up grad b: is that better ? i 'll also get rid of this " click to add notes " . grad b: so then the features we decided or we decided we were talked about , right ? the prosody , the discourse , verb choice . we had a list of things like " to go " and " to visit " and what not . the " landmark - iness " of i knew you 'd like that . grad b: , of a building . whether the and this i we actually have a separate feature but i decided to put it on the same line for space . walls which we can look up because if you 're gon na get real close to a building in the tango mode , right , there 's got ta be a reason for it . and it 's either because you 're in route to something else or you wanna look at the walls . the context , which in this case we ' ve limited to " business person " , " tourist " , or " unknown " , the time of day , and " open to suggestions " , is n't actually a feature . it 's " we are open to suggestions . " grad d: right . can ask the walls part of it is that , in this particular domain you said be i it could be on two different lines but are you saying that in this particular domain it happens the that landmark - iness cor is correlated with grad b: i either could put " walls " on its own line or " open to suggestions " off the slide . grad b: but if it 's architecturally significant you might be able to see it from like you m might be able to " vista " it , and be able to grad b: , versus , like , i was at this place in europe where they had little carvings of , like , dead people on the walls . i do n't remember w it was a long time ago . grad b: but if you looked at it real close , you could see the in intricacy of the walls . grad a: there is a term that 's often used . that 's " saliency " , or the " salience " of an object . and i was just wondering whether that 's the same as what you describe as " landmark - iness " . but it 's really not . an object can be very salient but not a landmark . grad d: not a landmark . there 's landmark for , touristic reasons and landmark for i navigational reasons . grad d: but you can imagine maybe wanting the oth both kinds of things there for different , goals . right ? grad b: but tourist - y landmarks also happen to be would n't could n't they also be they 're not exclusive groups , are they ? like non - tourist - y landmarks and grad b: ok , so our initial idea was not very satisfying , because our initial idea was all the features pointing to the output node . grad b: and , so we reasons being , it 'd be a pain to set up all the probabilities for that . if we moved onto the next step and did learning of some sort , according bhaskara we 'd be handicapped . i belief - nets very . 
grad c: usually , , n if you have n features , then it 's two to the n or exponential in n . grad b: so then our next idea was to add a middle layer , so the thinking behind that was we have the features that we ' ve drawn from the communication of some like , the someone s the person at the screen is trying to communicate some abstract idea , like " i ' m " the abstract idea being " i am a tourist i want to go to this place . " right ? so we 're gon na set up features along the lines of where they want to go and what they ' ve said previously and whatnot . and then we have the means that they should use . but the middle thing , we were thinking along the lines of maybe trying to figure out , like , the concept of whether they 're a tourist or whether they 're running an errand like that along those lines . or yes , we could things we could n't extract the from the data , the hidden variables . yes , good . so then the hidden variables hair variables we came up with were whether someone was on a tour , running an errand , or whether they were in a hurry , because we were thinking , if they were in a hurry there 'd be less likely to like or th grad c: want to do vista , right ? because if you want to view things you would n't be in a hurry . grad b: or they might be more likely to be using the place that they want to go to as a like a navigational point to go to another place . whether the destination was their final destination , whether the destination was closed . those are all and then " let 's look at the belief - net " ok . so that means that i should switch to the other program . right now it 's still in a toy version of it , because we did n't know the probabilities of or i 'll talk about it when i get the picture up . grad b: so this right what we let 's see . what happens if i maximize this ? there we go . so . the mode has three different outputs . the probability whether the probability of a vista , tango , or enter . the " context " , we simplified . it 's just the businessman , the tourist , unknown . verb used is actually personally amusing mainly because it 's just whether the verb is a tango verb , an enter verb , or a vista verb . grad c: not . that 's that needs a lot of work . but that would ' ve made the probably significantly be more complicated to enter , grad c: so we decided that for the purposes of this it 'd be simpler to just have three verbs . grad b: why do n't you mention things about this , bhaskara , that i am not that are not coming to my mind right now . grad c: ok , so note the four nodes down there , the things that are not directly extracted . actually , the five things . the " closed " is also not directly extracted i , from the grad c: right , so f but the other ones , the final destination , the whether they 're doing business , whether they 're in a hurry , and whether they 're tourists , that thing is all probabilistically depends on the other things . grad c: so we have n't , managed like we do n't have nodes for " discourse " and " parse " , although like in some sense they are parts of this belief - net . but the idea is that we just extract those features from them , so we do n't actually have a node for the entire parse , because we 'd never do inference on it anyway , so . grad d: so some of the top row of things what 's what 's " disc admission fee " ? grad c: whether they discuss the admission fees . so we looked at the data and in a lot of data people were saying things like " can i get to this place ? " what is the admission fee ? . 
so that 's like a huge clue that they 're trying to enter the place rather than to tango or vista , grad b: there were there 'd be other things besides just the admission fee , but , we did n't have grad d: so there are certain cues that are very strong either lexical or topic - based , concept cues grad d: for one of those . and then in that second row or whatever that row of time of day through that so all of those some of them come from the utterance and some of them are either world knowledge or situational things . so that you have no distinction between those and grad a: . i would actually suggest we go through this one more time so we all , agree on what the meaning of these things is at the moment and maybe what changes we grad b: , th so one thing i ' m unsure about , is how we have the discus the " admission fee " thing set up . so one thing that we were thinking was by doing the layers like this , we kept things from directly affecting the mode beyond the concept , but you could see perhaps discus the " admission fee " going directly to the mode pointing at " enter " , versus pointing to just at " tourist " , ok ? but we just decided to keep all the things we extracted to point at the middle and then down . grad a: why is the landmark the landmark is facing to the tourists . that 's because we 're talking about landmarks as touristic landmarks not as possible grad c: so let 's see . the variables . disc - " admission fee " is a binary thing , time of day is like morning , afternoon , night . is that the deal ? grad b: that 's how we have it currently set up , but it could be , based upon hour grad c: normally context will include a huge amount of information , but , we are just using the particular part of the context which consists of the switch that they flick to indicate whether they 're a tourist or not , i . grad c: so so it 's not really all of context . similarly prosody is not all of prosody but simply for our purposes whether or not they appear tense or relaxed . grad a: that 's very , ? the the so the context is a switch between tourist or non - tourist ? grad d: so final dest so it seems like that would really help you for doing business versus tourist , grad d: but so the context being , e i if that question 's in general , are you the ar are do they allow business people to be doing non - business things at the moment ? grad c: , right . so then landmark is verb used is like , right now we only have three values , but in general they would be a probability distribution over all verbs . rather , let me rephrase that . it it can take values in the set of all verbs , that they could possibly use . " walls " is binary , closed is binary final destination , again , all those are binary i . and " mode " is one of three things . grad c: walls is something that we extract from our world knowledge . , a . it is binary . grad a: so we can either be in a hurry or not , but we can not be in a medium hurry at the moment ? grad c: , we to do that we would add another value for that . and that would require s updating the probability distribution for " mode " as . because it would now have to like take that possibility into account . grad d: take a conti so , this will happen when we think more about the kinds of verbs that are used in each cases grad d: but you can imagine that it 's verb plus various other things that are also not in the bottom layer that would help you like it 's a conjunction of , i , the verb used and some other that would determine grad d: usually . 
i if that 's always the case i have n't looked at the data as much as you guys have . so . grad a: that 's always warping on something some entity , and maybe at this stage we will we do want to get modifiers in there grad c: we can do a demo in the sense that we can , just ob observe the fact that this will , do inference . grad c: so we can , set some of the nodes and then try to find the probability of other nodes . grad c: just se set a few of them . you do n't have to do the whole thing that we did last time . just like , maybe the fact that they use a certain verb actually forget the verb . just i , say they discussed the admission fee and the place has walls grad c: no that that does n't it 's not really consistent . they do n't discuss the admission fee . make that false . grad b: one thing that bugs me about javabayes is you have to click that and do this . grad c: so that is the probability that they 're entering , vista - ing or tango - ing . and grad b: if it 's night time , they have not discussed admission fee , and the n walls are . so , . i that makes sense . the reason i say the demo does n't work very is yesterday we observed everything in favor of taking a tour , and it came up as " tango " , over and over again . we could n't we could n't figure out how to turn it off of " tango " . grad c: like , we hand - tuned the probabilities , we were like " , if the person does this and this , let 's say forty percent for this , fifty per " like , . so that 's gon na happen . grad a: however , it the purpose was not really , at this stage , to come up with meaningful probabilities but to get thinking about that hidden middle layer . and so th and grad b: we would actually i once we look at the data more we 'll get more hidden nodes , but i 'd like to see more . not because it would expedite the probabilities , cuz it would n't . it would actually slow that down tremendously . grad b: no , we should have exponentially more middle nodes than features we ' ve extracted . i ' m ju i ' m just jo grad d: so . are " doing business " versus " tourist " they refer to your current task . like like current thing you want to do at this moment . grad c: . , that 's that 's an interesting point . whether you 're it 's whether it 's not grad c: it 's more like " are you are tourist ? are you in ham - like heidelberg for a " grad c: that 's a different thing . what if the context , which is not set , but still they say things like , " i want to go , see the castle and , et cetera . " grad a: so if you run out of cash as a tourist , and you need to go to the at grad d: , i see , you may have a task . wh you have to go get money and so you are doing business at that stage . grad c: and that 'll affect whether you want to enter or you if you kinda thing . grad d: so the " tourists " node should be , very consistent with the context node . if you say that 's more their in general what their background is . grad c: this context node is a bit of a like in d do we wanna have like it 's grad d: are you assuming that or not ? like is that to be if that 's accurate then that would determine tourist node . grad c: if the context were to set one way or another , that like strongly , says something about whether or not they 're tourists . so what 's interesting is when it 's not when it 's set to " unknown " . grad c: right now we have n't observed it , so i it 's averaging over all those three possibilities . but yes , you can set it to un " unknown " . 
grad a: and if we now do leave everything else as is the results should be the same , grad c: no , because we th - the way we set the probabilities might not have it 's an issue , like grad c: it is . so the issue is that in belief - nets , it 's not common to do what we did of like having , a d bunch of values and then " unknown " as an actual value . what 's common is you just like do n't observe the variable , right , and then just marginalizes but we did n't do this because we felt that there 'd i we were thinking in terms of a switch that actually grad c: but i y what the right thing is to do for that . i ' m not i if i am happy with the way it is . grad a: why do n't we can we , how long would it take to add another node on the observatory and , play around with it ? grad a: let 's just say make it really simple . if we create something that would be so th some things can be landmarks in your sense but they can never be entered ? so s a statue . grad a: so maybe we wanna have " landmark " meaning now " enterable landmark " versus , something that 's simply just a vista point , . grad c: so it 's addressing a variable that 's " enterable or not " . so like an " enterable , question mark " . grad b: also , did n't we have a size as one ? the size of the landmark . grad c: . not when we were doing this , but i at some point we did . grad b: for some reason i had that ok , that was a thought that i had at one point but then went away . grad c: so you want to have a node for like whether or not it can be entered ? grad a: , if we include that , accessibility , is it can it be entered ? then , this is binary as . and then , there 's also the question whether it may be entered . in the sense that , if it 's tom the house of tom cruise , it 's enterable but you may not enter it . you 're not allowed to . unless you are , whatever , his divorce lawyer . and and these are very observable from the ontology things . grad b: way does it actually help to distinguish between those two cases though ? whether it 's practically speaking enterable , or actually physically enterable or not ? grad a: y if you 're running an errand you maybe more likely to be able to enter places that are usually not al w you 're not usually not allowed to m grad d: it seems like it would for , determining whether they wanna go into it or not . grad a: let 's get this b clearer . s so it 's matrix between if it 's not enterable , period . grad b: whether it 's a public building , and whether it 's actually has a door . grad b: so tom cruise 's house is not a public building but it has a door . grad b: ok , sh explain to me why it 's necessary to distinguish between whether something has a door and is not public . or , if something it seems like it 's equivalent to say that it does n't have a door a and it or " not public " and " not a door " are equivalent things , it seems like in practice . grad a: right . so we would have what does it mean , then , that we have to we have an object type statue . that really is an object type . so there is there 's gon na be a bunch of statues . and then we have , an object type , that 's a hotel . how about hotels ? so , the most famous building in heidelberg is actually a hotel . it 's the hotel zum ritter , which is the only renaissance building in heidelberg that was left after the big destruction and for the thirty years war , blah - blah . grad a: it has wonderful walls . - and lots of detail , c and carvings , engravings and , so . 
but , it 's still an unlikely candidate for the tango mode i must say . but . so s so if you are a d it 's very tricky . so i your question is so far i have no really arg no real argument why to differentiate between statues as statues and houses of celebrities , from that point of view . let 's do a can we add , just so see how it 's done , a " has door " property or ? grad a: , , it might affect actually it 's it would n't affect any of our nodes , grad b: you could affect theoretically you could affect " doing business " with " has door " . grad c: i if javabayes is about that . it might be that if you add a new thing pointing to a variable , you just like it just overwrites everything . but you can check . grad b: , we have it saved . so . we can rel open it up again . grad c: that 's fine , but we have to see the function now . has it become all point fives or not ? grad b: so this is " has door " , true , false . that 's acceptable . and i want to edit the function going to that , no . grad c: what would be if it is if it just like kept the old function for either value but . nope . did n't do it . grad a: ok , so just dis dismiss everything . close it and load up the old state so it does n't screw that up . maybe you can read in ? grad d: yes . really i ha i ' ve i have n't used it a lot and i have n't used it in the last many months so , we can ask someone . grad c: it might be worth asking around . like , we looked at a page that had like a bunch of grad c: in a way this is a lot of good features in java it 's cra has a gui and it 's i those are the main two things . it does learning , it has grad b: no it does n't , actually . i did n't think it did learning . maybe it did a little bit of learning , i do n't remember . grad c: but , maybe another thing that but its interface is not the greatest . so . grad a: command line . what is the c code ? can w can we see that ? how do you write the code grad c: there is actually a text file that you can edit . but it 's you do n't have to do that . grad c: it 'll ask you what it wants what you want to open it with and see what bat , i . grad b: the carriage returns on some of them . how they get " auto - fills " i , grad c: that 's how actual probability tables are specified . as , like , lists of numbers . so theoretically you could edit that . grad c: we can maybe write an interface th for entering probability distributions easily , something like a little script . that might be worth it . grad d: i actually seem to recall srini complaining about something to do with entering probability so this is probably grad c: i if it actually manipulate the source , though . that might be a bit complicated . it might be simpler to just have a script that , it 's , like , friendly , grad a: but if th if there is an xml file that or format that it can also read it just reads this , when it starts . grad b: i know there is an i was looking on the we web page and he 's updated it for an xml version of i bayes - nets . there 's a bayes - net spec for in xml . grad c: he 's like this guy has ? the javabayes guy ? so but , e he does n't use it . so in what sense has he updated it ? grad b: because at least the i could have misread the web page , i have a habit of doing that , but . grad b: do i have more slides ? yes , one more . future work . every presentation have a should have a " future work " slide . but it 's we already talked about all this , so . grad c: . the additional thing is i learning the probabilities , also . 
e that 's maybe , i if grad b: and if you have a presentation that does n't have something that does n't work , then you have " what i learned " , as a slide . grad b: you could . my first approach failed . what i learned . ok , so that our presentation 's finished . grad b: i like about these meetings is one person will nod , and then the next person will nod , and then it just goes all the way around the room . grad b: no i earlier i went { nonvocalsound } and bhaskara went { nonvocalsound } and you did it . you did it . grad d: so a more general thing than " discussed admission fee " , could be i ' m just wondering whether the context , the background context of the discourse might be if there 's a way to define it or maybe generalize it some way , there might be other cues that , say , in the last few utterances there has been something that has strongly associated with say one of the particular modes , i if that might be grad d: , and into that node would be various things that could have specifically come up . grad a: a general strategy here , this is excellent because it gets you thinking along these terms is that maybe we ob we could observe a couple of discourse phenomena such as the admission fee , and something else , that happened in the discourse before . and let 's make those four . and maybe there are two so maybe this could be a separate region of the net , which has two has it 's own middle layer . maybe this , has some , funky thing that di if this and this may influence these hidden nodes of the discourse which is maybe something that is , a more general version of the actual phenomenon that you can observe . so things that point towards grad b: so instead of single node , for like , if they said the word " admission fee , or maybe , " how much to enter " other cues . grad b: that would all f funnel into one node that would constitute entrance requirements like that . grad d: it get into plan recognition kinds of things in the discourse . that 's like the bigger , version of it . grad a: exactly . and then maybe there are some discourse acts if they happened before , it 's more for a cue that the person actually wants to get somewhere else and that you are in a route , proceeding past these things , so this would be just something that where you want to pass it . is that it ? however these are then the nodes , the observed nodes , for your middle layer . so this again points to " final destination " , " doing business " , " tourist hurry " and . and so then we can say , " ok . we have a whole region " in a e grad a: , exactly . and this is just then just one . so e because at the end the more we add , the more spider - web - ish it 's going to become in the middle and the more of hand editing . it 's going to get very ugly . but with this way we could say " ok , these are the discourse phenomena . they ra may have there own hidden layer that points to some of the real hidden layer , or the general hidden layer . and the same we will be able to do for syntactic information , the verbs used , the object types used , modifiers . and maybe there 's a hidden layer for that . and and . then we have context . grad c: so essentially a lot of those nodes can be expanded into little bayes - nets of their own . grad b: one thing that 's been bugging me when i more i look at this is that the i , the fact that the there 's a complete separation between the observed features and in the output . , it makes it cleaner , but then . 
if the discourse does grad b: , the " discourse admission fee " node seems like it should point directly to the or increase the probability of " enter directly " versus " going there via tourist " . grad c: or we could like add more , middle nodes . like we could add a node like do they want to enter it , which is affected by admission fee and by whether it 's closed and by whether it has a door . so it 's like there are those are the two options . either like make an arrow directly or put a new node . grad a: and if it if you do it if you could connect it too hard you may get such phenomenon that like " so how much has it cost to enter ? " and the answer is two hundred fifty dollars , and then the persons says " i want to see it . " meaning " it 's way out of my budget " grad a: , nothing comes to mind . without thinking too hard . , maybe , opera premiers . grad d: or maybe , a famous restaurant . or , i . there are various things that you might w not want to eat a meal there but your own table . grad a: that the h nothing beats the admission charge prices in japan . so there , two hundred dollars is moderate for getting into a discotheque . then again , everything else is free then once you 're ins in there . grad a: food and drink and . so . but i , i we can something somebody can have discussed the admission fee and u the answer is s if we , still , based on that result is never going to enter that building . because it 's just too expensive . grad b: so the discourse refers to " admission fee " but it just turns out that they change their mind in the middle of the discourse . grad d: you have to have some notion of not just there 's a there 's change across several turns of discourse so i how if any of this was discussed but how i if it all this is going to interact with whatever general , other discourse processing that might be happen . grad a: it works like this . the , . the first thing we get is that already the intention is t they tried to figure out the intention , simply by parsing it . and this m wo n't differentiate between all modes , but at least it 'll tell us " ok here we have something that somebody that wants to go someplace , now it 's up for us to figure out what going there is happening , and , if the discourse takes a couple of turns before everything all the information is needed , what happens is the parser parses it and then it 's handed on to the discourse history which is , o one of the most elaborate modules . it 's it 's actually the whole memory of the entire system , that knows what wh who said what , which was what was presented . it helps an anaphora resolution and it fills in all the structures that are omitted , so , because you say " ok , how can i get to the castle ? , how much is it ? and " i would like to g let 's do it " and . so even without an a ana anaphora somebody has to make that information we had earlier on is still here . grad a: because not every module keeps a memory of everything that happened . so whenever the , person is not actually rejecting what happened before , so as in " no i really do n't want to see that movie . i 'd rather stay home and watch tv " what movie was selected in what cinema in what town is going to be added into the disc into the representations every di at each dialogue step , by the discourse model , that 's what it 's called . and , it does some help in the anaphora resolution and it also helps in coordinating the gesture screen issues . 
so a person pointing to something on the screen , the discourse model actually stores what was presented at what location on the s on the screen so it 's a rather huge thing but we can it has a very clear interface . we can query it whether admission fees were discussed in the last turn and the turn before that or how deep we want to search which is a question . how deep do we want to sear , but we should try to keep in mind that , we 're doing this for research , so we should find a limit that 's reasonable and not go , all the way back to adam and eve . , did that person ever discuss admissions fee fees in his entire life ? and the dialogues are pretty concise and anyway . grad d: so one thing that might be helpful which is implicit in the use of " admission fee discussion " as a cue for entry , is thinking about the plans that various people might have . like all the different general schemas that they might be following this person is , finding out information about this thing in order to go in as a tourist or finding out how to get to this place in order to do business . , because then anything that 's a cue for one of the steps would be slight evidence for that overall plan . , i . they 're in non in more traditional ai kinds of plan recognition things you have , some idea at each turn of agent doing something , ok , wha what plans is this a consistent with ? and then get s some more information and then you see " here 's a sequence that this roughly fits into " . it it might be useful here too . i how you 'd have to figure out what knowl what knowledge representation would work for that . grad a: the u it 's in the these plan schemas . there are some of them are extremely elaborate , what do you need to buy a ticket ? and it 's fifty steps , just for buying a ticket at a ticket counter , and maybe that 's helpful to look at it to look at those . it 's amazing what human beings can do . w when we talked we had the example , of you being a s a person on a ticket counter working at railway station and somebody r runs up to you with a suitcase in his hands , says new york and you say track seven , and it 's because that person actually is following , you execute a whole plan of going through a hundred and fifty steps , without any information other than " new york " , inferring everything from the context . so , works . , even though there is probably no train from here to new york , grad a: but it 's possible . , no you probably have to transfer also somewhere else . is that t san francisco , chicago ? is that possible ? grad b: one time i saw a report on trains , and there is a l i if there was a line that went from somewhere , maybe it was sacramento to chicago , but there was like a california to chicago line of some sort . i could be wrong though . it was a while ago . grad a: it never went all the way , you always had to change trains at omaha , grad c: and there 's much more of them . , they 're , it 's way better grad a: i used amtrak quite a bit on the east coast and i was surprised . it was actually ok . , on boston new york , new york rhode island , whatever , grad d: just kidding . so tha that structure that robert drew on the board was like more , cue - type - based , right , here 's like we 're gon na segment off a bit of that comes from discourse and then some of the things we 're talking about here are more , we mentioned maybe if they talk about , entering or som like they might be more task - based . 
so i if there there 's some m more than one way of organizing the variables into something grad a: that what you guys did is really nicely sketching out different tasks , and maybe some of their conditions . one task is more likely you 're in a hurry when you do that s doing business , and less in a hurry when you 're a tourist tourists may have never have final destinations , because they are eternally traveling around so maybe what happened what might happen is that we do get this task - based middle layer , and then we 'll get these sub - middle layers , that are more cue - based . grad a: nah ? might be might be a dichotomy of the world . so , i suggest w to for to proceed with this in the sense that maybe throughout this week the three of us will talk some more about maybe segmenting off different regions , and we make up some toy a observable " nodes " is that what th grad a: feature ma make up some features for those identify four regions , maybe make up some features for each region and , and middle layer for those . and then these should then connect somehow to the more plan - based deep space grad a: . the - they will be aud ad - hoc for some time to come . grad c: , this is like the probabilities and all are completely ad - hoc . we need to look of them . but , they 're even like , close to the end we were like , we were like really ad - hoc . grad c: right ? cuz if it 's like , if it 's four things coming in , and , say , some of them have like three possibilities and all that . so you 're thinking like a hundred and forty four possible things numbers to enter , grad b: some of them are completely absurd too , like they want to enter , but it 's closed , grad b: it 's night time , there are tourists and all this weird happens at the line up and you 're like grad c: , the only like possible interpretation is that they are like come here just to rob the museum to that effect . grad d: in which case you 're supposed to alert the authorities , and see appropriate action . grad c: , another thing to do , is also to , i to ask around people about other bayes - net packages . is srini gon na be at the meeting tomorrow , do ? grad b: i have n't j jerry never sent out a sent out an email , did he , ever ? grad c: but he mentioned at the last meeting that someone was going to be talking , i forget who . grad a: that will be one thing we could do . i actually , have , also we can , start looking at the smartkom tables and i will i actually wanted to show that to you guys now but . grad a: , no i actually made a mistake because it fell asleep and when linux falls asleep on my machine it 's it does n't wake up ever , so i had to reboot grad a: and if i reboot without a network , i will not be able to start smartkom , because i need to have a network . so we 'll do that t maybe grad c: but . but once you start sart start smartkom you can be on you do n't have to be on a network anymore . , interesting . grad a: it looks up some that , is that is in the written by the operating system only if it if you get a dhcp request , so it , my computer does not know its ip address , . so . unless it boots up with networking . grad a: and i do n't have an ip address , they ca n't look up they who localhost is , and . always fun . but it 's a , simple solution . we can just , go downstairs and look at this , but maybe not today . the other thing i will ok , i have to report , data collection . 
we interviewed fey , she 's willing to do it , meaning be the wizard for the data collection , also maybe transcribe a little bit , if she has to , but also recruiting subjects , organizing them , and . so that looks good . jerry however suggested that we should have a trial run with her , see whether she can actually do all the spontaneous , eloquent and creativeness that we expect of the wizard . and i talked to liz about this and it looks as if friday afternoon will be the time when we have a first trial run for the data . grad c: who will there be a is one is you one of you gon na be the subject ? like are you grad a: liz also volunteered to be the first subject , which might be even better than us guys . grad a: if we do need her for the technical , then one of you has to jump in . grad b: i like how we ' ve you guys have successfully narrowed it down . is one of you going to be the subject ? is one of you jump in . grad c: figured it has to be someone who 's , familiar enough with the data to problems for the wizard , so we can , see if they 're good . grad d: , in this case it 's a p it 's a testing of the wizard rather than of the subject . grad a: yes w we would like to test the wizard , but , if we take a subject that is completely unfamiliar with the task , or any of the set up , we get a more realistic grad d: that might be a little unfair . i ' m if we , you think there 's a chance we might need liz for , whatever , the technical side of things ? i ' m we can get other people around who anything , if we want another subject . like drag ben into it . although he might problems so , is it a experimental setup for the , data collection ready determined ? grad b: i like that . test the wizard . i want that on a t - shirt . grad a: it 's experimental setup u on the technical issue yes , except we st we still need a recording device for the wizard , just a tape recorder that 's running in a room . but in terms of specifying the scenario , we ' ve gotten a little further but we wanted to until we know who is the wizard , and have the wizard partake in the ultimate definition probe . so so if on friday it turns out that she really likes it and we really like her , then nothing should stop us from sitting down next week and getting all the details completely figured out . grad d: so the ideal task , will have whatever i how much the structure of the evolving bayes - net will af affect like we wanna be able to collect as much of the variables that are needed for that , grad a: bu - e i ' m even this this tango , enter , vista is , itself , an ad - hoc scenario . the the basic u idea behind the data collection was the following . the data we get from munich is very command line , simple linguistic . hardly anything complicated . no metaphors whatsoever . not a rich language . so we wanted just to collect data , to get that elicits more , that elicits richer language . and we actually did not want to constrain it too much , just see what people say . and then maybe we 'll discover the phenomenon the phenomena that we want to solve , with whatever engine we come up with . so this is a parallel track , there they hopefully meet , grad a: it should tell us , what phenomenon could occur , it should tell us also maybe something about the difference between people who think they speak to a computer versus people who think they speak to a human being and the differences there . 
so it may get us some more information on the human - machine pragmatics , that no one knows anything about , as of yesterday . and nothing has changed since then , . and secondly , now that we have started to lick blood with this , and especially since johno ca n't stop tango - ing , we may actually include , those intentions . so now we should maybe have at least one navigational task with explicit not ex it 's implicit that the person wants to enter , and maybe some task where it 's more or less explicit that the person wants to take a picture , or see it . so that we can label it . , that 's how we get a corpus that we can label . whereas , if we 'd just get data we 'd never they actually wanted , we 'd get no cues . ###summary: the group discussed the first version of the bayes-net used to work out a user's intentions when asking for directions from a navigation device. three intentions were identified: vista ( to view ) , enter ( to visit ) and tango ( to approach ). the structure of the belief-net comprises , firstly , a feature layer , which includes linguistic , discourse and world knowledge information that can be gleaned from the data. it is possible for these variables to form thematic clusters( eg "entrance" , "type of object" , "verb" ) , each one with a separate middle layer. these feed , in turn , into the main middle layer , that defines more general hidden variables , such as the tourist/business status of the user. the feature layer can end up being cue-based , while the middle layers task-based. the latter determine the final probability of each intention in the output layer. this first model of the belief-net was built in javabayes , since it is a free package , has a graphical interface , and it can take xml files as input. at this stage , all the actual probabilities are ad-hoc and hand-coded. however , there has been progress in the design and organisation of experiments , that will eventually provide data more useful and appropriate for this task. it is necessary for the belief-net to have at least one layer of nodes between the features and the final output. this makes the structure more flexible in terms of coding feature-layer probabilities. another technique to systematise the work is the thematic clustering of the features , each cluster forming a bayes-net of each own: for example features like "admission fee" and "opening hours" can feed into an intermediate "entrance" node connecting to the main middle layer. the next stage is to refine the set of feature nodes and identify possible clusters. although , in theory , traditional ai plan recognition techniques could also be helpful for inferring intentions , the schemas involved are too elaborate for this task. further work also includes discussing the possible advantages of bayes-net packages , other than javabayes , with experts at icsi. if they continue using javabayes , a script to help with the inputting of probabilities in the nodes is needed , as the in-built method is cumbersome. finally , it was decided that at least some of the experiments designed for the new data collection initiative will factor in the intentions studied in the current task. the set of cues that form the feature nodes is not well-defined yet. especially with lexical cues ( verbs , modifiers etc ) , no one offered specific intuitions as to how they might contribute to the inference of intentions. 
other features , like "admission fee" , may be intuitively linked with one of the outputs ( enter ); however , any probabilities are coded in an ad-hoc fashion and are by no means realistic. cases like this , where feature and output seem to be linked directly , bring the necessity of a middle layer in the belief-net into question. nevertheless , not having a middle layer would not allow for shifts in the discourse and would make the setting of probabilities and manipulation of the belief-net clumsy. some issues with the use of javabayes also arose: the addition of new variables in an existing node overwrites all previous settings , and the native text file where the probability tables are set is not easy to read; this makes adding and changing variables and nodes problematic. finally , it is unclear how much learning can be done on the created nets. there was a demonstration of the structure and the function of a toy version of the belief-net for the intentionality task. the feature nodes include things like prosody , discourse , verb choice , "landmark-iness" of a building , time of day and whether the admission fee was discussed. the values these nodes take feed into the middle layer nodes identified as hidden variables of the user/device interaction , such as whether the user is on tour , running an errand or in a hurry. these , in turn , help infer whether the user wants to see , enter or simply approach a building. the set of feature nodes is derived from linguistic cues , world knowledge and discourse history. smartkom , although it does not code for intentions as specified in this task , provides a model of the discourse , which can be useful for the detection of features through querying and anaphora resolution. experiments for the collection of new data will start soon , since someone who will recruit subjects and help run the experiments has already been hired and the design of the experiments has also progressed significantly.
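to make the three-layer shape described above concrete ( feature nodes feeding hidden user variables , which in turn feed the vista / enter / tango output ) , here is a minimal sketch in plain python . every name and number in it is an invented stand-in , in the same ad-hoc spirit the summary itself attributes to the hand-coded tables ; it is not the actual javabayes model .

    # toy three-layer net: one feature -> one hidden variable -> intention.
    # all probabilities are illustrative stand-ins, mirroring the summary's
    # note that the real tables were ad-hoc and hand-coded.
    P_tourist = {True: 0.5, False: 0.5}            # hidden: is the user touring?
    P_fee_given_tourist = {True: 0.7, False: 0.2}  # feature: admission fee discussed?
    P_intent_given_tourist = {                     # output: vista / enter / tango
        True:  {"vista": 0.3, "enter": 0.6, "tango": 0.1},
        False: {"vista": 0.1, "enter": 0.3, "tango": 0.6},
    }

    def intent_posterior(fee_discussed):
        """p(intention | fee evidence) by brute-force enumeration
        over the hidden tourist variable."""
        scores = {"vista": 0.0, "enter": 0.0, "tango": 0.0}
        for tourist in (True, False):
            p_feat = (P_fee_given_tourist[tourist] if fee_discussed
                      else 1.0 - P_fee_given_tourist[tourist])
            joint = P_tourist[tourist] * p_feat
            for intent, p in P_intent_given_tourist[tourist].items():
                scores[intent] += joint * p
        z = sum(scores.values())
        return {k: v / z for k, v in scores.items()}

    print(intent_posterior(fee_discussed=True))   # mass shifts toward "enter"

the middle layer is what keeps the tables small and the net steerable : each feature conditions on one hidden variable rather than on the output directly , which is the flexibility argument the summary makes .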
grad a: not pre - doing everything . the lunch went a little later than i was expecting , chuck . postdoc f: , i ' m i sent a couple of items . they 're they 're practical . postdoc f: , maybe { nonvocalsound } raise the issue of microphone , procedures with reference to the cleanliness of the recordings . postdoc f: and then maybe { nonvocalsound } ask , th , these guys . the we have great , p steps forward in terms of the nonspeech - speech pre - segmenting of the signal . phd b: since , since i have to leave as usual at three - thirty , can we do the interesting first ? professor g: , and , the other thing , which i 'll just say very briefly that maybe relates to that a little bit , which is that , , one of the suggestions that came up in a brief meeting i had the other day when i was in spain with , manolo pardo and javier , ferreiros , who was here before , was , why not start with what they had before but add in the non - silence boundaries . so , in what javier did before when they were doing , h he was looking for , speaker change points . professor g: as a simplification , he originally did this only using silence as , a putative , speaker change point . and , he did not , say , look at points where you were changing broad sp , phonetic class , . and for broadcast news , that was fine . here it 's not . and , so one of the things that they were pushing in d in discussing with me is , w why are you spending so much time , on the , feature issue , when perhaps if you deal with what you were using before and then just broadened it a bit , instead of just ta using silence as putative change point also ? professor g: so then you ' ve got you already have the super - structure with gaussians and h - , simple h m ms and . and you might so there was a little bit of a difference of opinion because that it was it 's interesting to look at what features are useful . but , on the other hand i saw that the they had a good point that , if we had something that worked for many cases before , maybe starting from there a little bit because ultimately we 're gon na end up with some s su structure like that , where you have some simple hmm and you 're testing the hypothesis that , there is a change . so so anyway , reporting that . professor g: but , so . , why do n't we do the speech - nonspeech discussion ? phd c: , so , what we did so far was using the mixed file to detect s speech or nonspeech portions in that . phd c: and what i did so far is used our old munich system , which is an hmm - ba based system with gaussian mixtures for s speech and nonspeech . and it was a system which used only one gaussian for silence and one gaussian for speech . and now i added , multi - mixture possibility for speech and nonspeech . and i did some training on one dialogue , which was transcribed by . we we did a nons s speech - nonspeech transcription . phd c: adam , dave , and i , we did , for that dialogue and i trained it on that . and i did some pre - segmentations for jane . and i ' m not how good they are or what the transcribers say . they they can use it or ? postdoc f: , they think it 's a terrific improvement . and , it real it just makes a world of difference . and , y you also did some something in addition which was , for those in which there { nonvocalsound } was , quiet speakers in the mix . phd c: , . that that was one thing , why i added more mixtures for the speech . so i saw that there were loud loudly speaking speakers and quietly speaking speakers . 
and so i did two mixtures , one for the loud speakers and one for the quiet speakers . grad a: and did you hand - label who was loud and who was quiet , or did you just ? phd c: hopefully . it 's just our old munich , loudness - based spectrum on mel scale twenty critical bands and then loudness . phd c: and four additional features , which is energy , loudness , modified loudness , and zero crossing rate . so it 's twenty - four features . postdoc f: and you also provided me with several different versions , which i compared . and so you change { nonvocalsound } parameters . what do you wanna say something about the parameters { nonvocalsound } that you change ? phd c: you can specify the minimum length of speech or and silence portions which you want . and so i did some modifications in those parameters , changing the minimum length for s for silence to have , er to have , to have more or less , silence portions in inserted . grad a: right . so this would work for , pauses and utterance boundaries and things like that . but for overlap i imagine that does n't work , that you 'll have plenty of s sections that are postdoc f: that 's true . but { nonvocalsound } it saves so much time the { nonvocalsound } transcribers just enormous , enormous savings . fantastic . professor g: that 's great . , just qu one quickly , still on the features . so you have these twenty - four features . , a lot of them are spectral features . is there a transformation , like principal components transformation ? phd c: but we saw , when we used it , f for our close - talking microphone , which , for our recognizer in munich we saw that w it 's not so necessary . it it works as f with without , a lda . professor g: ok . no , i was j curious . , i do n't think it 's a big deal for this application , postdoc f: ok . but then there 's another thing that also thilo 's involved with , which is , ok , and also da - dave gelbart . so there 's this problem of and w and so we had this meeting . th - the { nonvocalsound } also adam , before the before you went away . we , regarding the representation { nonvocalsound } of overlaps , because at present , { nonvocalsound } , because { nonvocalsound } of the limitations of th the interface we 're using , overlaps are , not being { nonvocalsound } encoded by { nonvocalsound } the transcribers in as complete { nonvocalsound } and , detailed a way as it might be , and as might be desired would be desired in the corpus ultimately . so we do n't have start and end points { nonvocalsound } at each point where there 's an overlap . we just have the { nonvocalsound } overlaps { nonvocalsound } encoded in a simple bin . , ok . so { nonvocalsound } @ the limits of the { nonvocalsound } over of the interface are such that we were at this meeting we were entertaining how we might either expand { nonvocalsound } the interface or find other tools which already do what would be useful . because what would ultimately be , ideal in my view and , i had the sense that it was consensus , is that , a thorough - going musical score notation would be { nonvocalsound } the best way to go . because { nonvocalsound } you can have multiple channels , there 's a single time - line , it 's very clear , flexible , and all those things . ok . so , , i spoke i had a meeting with dave gelbart on and he had , excellent ideas on how the interface could be modified to do this representation . 
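as a gloss on the front end thilo describes above , twenty mel-scale critical-band loudness values plus energy , loudness , modified loudness and zero-crossing rate , twenty-four features per frame , here is a rough numpy sketch . the frame sizes , band edges , loudness exponent and the stand-in for "modified loudness" are guesses , not the munich system's actual settings .

    import numpy as np

    SR, FRAME, HOP, N_BANDS = 16000, 400, 160, 20  # 25 ms frames, 10 ms hop (assumed)

    def frame_features(x):
        """24 features per frame: 20 mel-band loudnesses + energy, loudness,
        modified loudness, zero-crossing rate. a sketch, not the munich code."""
        mel_max = 2595 * np.log10(1 + (SR / 2) / 700)
        edges = 700 * (10 ** (np.linspace(0, mel_max, N_BANDS + 1) / 2595) - 1)
        freqs = np.fft.rfftfreq(FRAME, 1.0 / SR)
        feats = []
        for start in range(0, len(x) - FRAME + 1, HOP):
            raw = x[start:start + FRAME]
            w = raw * np.hanning(FRAME)
            power = np.abs(np.fft.rfft(w)) ** 2
            bands = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                              for lo, hi in zip(edges[:-1], edges[1:])])
            loud = bands ** 0.33                 # stevens-style loudness compression
            energy = power.sum()
            loudness = loud.sum()
            mod_loudness = np.log1p(loudness)    # stand-in for "modified loudness"
            zcr = np.mean(np.abs(np.diff(np.sign(raw))) > 0)
            feats.append(np.concatenate([loud, [energy, loudness, mod_loudness, zcr]]))
        return np.array(feats)                   # shape: (n_frames, 24)

the minimum-duration knobs thilo tunes would then sit downstream of this , in the speech / nonspeech hmm's state topology rather than in the features themselves .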
but , he in the meantime you were checking into the existence of already , existing interfaces which might already have these properties . so , do you wanna say something about that ? phd c: yes . , i talked with , munich guys from ludwi - ludwig maximilians university , who do a lot of transcribing and transliterations . and they said they have , a tool they developed themselves and they ca n't give away , f it 's too error - prone , and had it 's not supported , a and but , susanne bur - burger , who is at se cmu , he wa who was formally at in munich and w and is now at with cmu , she said she has something which she uses to do eight channels , trans transliterations , eight channels simultaneously , phd c: so i ' m not if we can use it . she said she would give it to us . it would n't be a problem . and i ' ve got some manual down in my office . grad a: , maybe we should get it and if it 's good enough we 'll arrange windows machines to be available . postdoc f: i also wanted to be , i ' ve seen the this is called praat , praat , { nonvocalsound } which i means spee speech in dutch . postdoc f: but in terms { nonvocalsound } of it being { nonvocalsound } windows { nonvocalsound } versus professor g: the other thing , to keep in mind , we ' ve been very concerned to get all this rolling so that we would actually have data , professor g: but , our outside sponsor is actually gon na kick in and ultimately that path will be smoothed out . so i if we have a long - term need to do lots and lots of transcribing . we had a very quick need to get something out and we 'd like to be able to do some later because just it 's inter it 's interesting . but as far a , with any luck we 'll be able to wind down the larger project . grad a: what our decision was is that we 'll go ahead with what we have with a not very fine time scale on the overlaps . grad a: and and do what we can later to clean that up if we need to . postdoc f: and and i was just thinking that , if it were possible to bring that in , like , this week , then { nonvocalsound } when they 're encoding the overlaps { nonvocalsound } it would be for them to be able to specify when , the start points and end points of overlaps . th - they 're { nonvocalsound } making really quick progress . and , so my goal was w m my charge was to get eleven hours by the end of the month . and it 'll be i ' m clear that we 'll be able to do that . postdoc f: i sent { nonvocalsound } it to , who did i send that to ? i sent it to a list and { nonvocalsound } i sent it to { nonvocalsound } the { nonvocalsound } e to the local list . postdoc f: you saw that ? so brian did tell { nonvocalsound } me that { nonvocalsound } what you said , that , { nonvocalsound } that { nonvocalsound } our that they are making progress and that he 's going that { nonvocalsound } they 're { nonvocalsound } going he 's gon na check the f the output of the first transcription and professor g: , it 's all the difference in the world . , he 's on it now . professor g: . , it 's just saying that one of our best people is on it , who just does n't happen to be here anymore . someone else pays him . phd b: , do n't we did n't we previously decide that the ibm transcripts would have to be checked anyway and possibly augmented ? grad a: and since he 's not here , i 'll repeat it to at least modify transcriber , which , if we do n't have something else that works , that 's a pretty good way of going . grad a: and we discussed on some methods to do it . 
my approach originally , and i ' ve already hacked on it a little bit it was too slow because i was trying to display all the waveforms . but he pointed out that you do n't really have to . that 's a good point . that if you just display the mix waveform and then have a user interface for editing the different channels , that 's perfectly sufficient . postdoc f: , exactly . and just keep those { nonvocalsound } things separate . and and , dan ellis 's hack already allows them to be { nonvocalsound } able to display different { nonvocalsound } waveforms to clarify overlaps and things , postdoc f: , yes , but { nonvocalsound } what is that , from the transcriber 's { nonvocalsound } perspective , those { nonvocalsound } two functions are separate . and dan ellis 's hack handles the , choice { nonvocalsound } the ability to choose different waveforms from moment to moment . grad a: but only to listen to , not to look at . the waveform you 're looking at does n't change . postdoc f: , but { nonvocalsound } that 's ok , cuz they 're , they 're focused on the ear anyway . and then and then the hack to preserve the overlaps { nonvocalsound } better would be one which creates different output files for each channel , which then { nonvocalsound } would also serve liz 's request of having , a single channel , separable , cleanly , easily separable , transcript tied to a single channel , audio . postdoc f: not directly . i ' m trying to think if i could have gotten it over a list . i do n't . professor g: ok . , holidays may have interrupted things , cuz in they seem to want to get clear on standards for transcription standards and with us . grad a: and agree upon a format . though i do n't remember email on that . so was i not in the loop on that ? professor g: . , i do n't think i mailed anybody . think i told them to contact jane that , if they had a professor g: so , . maybe i 'll , ping them a little bit about it to get that straight . professor g: . so is it cuz with any luck there 'll actually be a there 'll be collections at columbia , collections at uw dan is very interested in doing some other things , grad a: , it 's important both for the notation and the machine representation to be the same . postdoc f: n there was also this , { nonvocalsound } , email from dan regarding the speech - non nonspeech segmentation thing . i if , we wanna , and dan gel - and dave gelbart is interested in pursuing the aspect { nonvocalsound } of using amplitude { nonvocalsound } as a basis for the separation . professor g: i had mentioned this a couple times before , the c the commercial devices that do , , voice , active miking , look at the amp at the energy at each of the mikes . and and you compare the energy here to some function of all of the mikes . so , by doing that , rather than setting any , absolute threshold , you actually can do pretty good , selection of who 's talking . and those systems work very , , so people use them in panel discussions and with sound reinforcement differing in , and , those if boy , the guy i knew who built them , built them like twenty years ago , so they 're it 's the techniques work pretty . postdoc f: cuz there is one thing that we do n't have right now and that is the automatic , channel identifier . that that , that would g help in terms of encoding of overlaps . the the transcribers would have less , disentangling to do if that were available . professor g: . 
so , you can look at some p you have to play around a little bit , to figure out what the right statistic is , professor g: but you compare each microphone to some statistic based on the overall , and we also have these we have the advantage of having distant mikes too . so that , you cou yo grad a: , although the using the close - talking would be much better . would n't it ? professor g: . i . it 'd be if i was actually working on it , i 'd sit there and play around with it , and get a feeling for it . , the , you certainly wanna use the close - talking , as a at least . i if the other would add some other helpful dimension or not . phd d: ok . what what are the different , classes to code , the overlap , you will use ? postdoc f: so types of overlap ? , so { nonvocalsound } at a meeting that was n't transcribed , we worked up a typology . and , postdoc f: yes , exactly . that has n't changed . so it { nonvocalsound } i the it 's a two - tiered structure where the first one is whether { nonvocalsound } the person who 's interrupted continues or not . and then below that there 're { nonvocalsound } subcategories , that have more to do with , { nonvocalsound } , is it , simply { nonvocalsound } backchannel or is { nonvocalsound } it , someone completing someone else 's thought , or is it someone in introducing a new thought . grad a: and i hope that if we do a forced alignment with the close - talking mike , that will be enough to recover at least some of the time information of when the overlap occurred . phd b: so who 's gon na do that ? who 's gon na do forced alignment ? grad a: and i imagine they still plan to but , i have n't spoken with them about that recently . postdoc f: this is wonderful { nonvocalsound } to have a direct contact like that . , th lemme ask { nonvocalsound } you this . it occurs to me one of my transcribers t { nonvocalsound } told { nonvocalsound } me today that she 'll { nonvocalsound } be finished with one meeting , by , she said tomorrow but then she said { nonvocalsound } , but { nonvocalsound } the , let 's just , say maybe the day after just to be s on the safe side . i could send brian the , { nonvocalsound } the { nonvocalsound } transcript . i know these { nonvocalsound } are er , i could send him that { nonvocalsound } if { nonvocalsound } it would be possible , { nonvocalsound } or a good idea or not , to { nonvocalsound } try { nonvocalsound } to do a s forced alignment on what we 're on the way we 're encoding overlaps now . professor g: , just talk to him about it . , he 's he just studies , he 's a colleague , a friend , and , professor g: it was just a question of getting , the right people connected in , who had the time . grad a: is he on the mailing list ? the meeting recorder mailing li ? we should add him . phd e: did something happen , morgan , that he got put on this , or was he already on it , professor g: no , i , p it it oc i h it 's , something happened . i what . postdoc f: that would be { nonvocalsound } like that 'd be like him . he 's great . professor g: so , where are we ? maybe , brief , let 's why do n't we talk about microphone issues ? grad a: , so one thing is that i did look on sony 's for a replacement for the mikes for the head m head - worn ones cuz they 're so uncomfortable . but i need someone who knows more about mikes than i do , because i could n't find a single other model that seemed like it would fit the connector , which seems really unlikely to me . 
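the voice-active-miking idea morgan describes above , judging each close-talking channel's energy against a statistic over all channels rather than against an absolute threshold , could be sketched like this ; the 10 ms frames , median statistic and 6 db margin are precisely the "play around with it" parameters , in other words guesses .

    import numpy as np

    def active_channels(chans, sr=16000, margin_db=6.0):
        """chans: (n_mics, n_samples) close-talking signals.
        returns a boolean (n_mics, n_frames) mask of who is likely talking:
        a mic is 'on' when its frame energy beats the cross-channel median
        by margin_db, so no absolute threshold is ever set."""
        hop = sr // 100                                  # 10 ms frames (assumed)
        n = (chans.shape[1] // hop) * hop
        frames = chans[:, :n].reshape(chans.shape[0], -1, hop)
        log_e = 10 * np.log10(np.mean(frames ** 2, axis=2) + 1e-12)
        ref = np.median(log_e, axis=0, keepdims=True)    # per-frame cross-mic statistic
        return log_e > ref + margin_db

a side effect worth noting : frames where more than one row comes out true are themselves a crude channel-based overlap cue , which would also give the transcribers the automatic channel identifier jane asks for .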
does anyone , like , know stores or know about mikes who would know the right questions to ask ? professor g: , i probably would . , my knowledge is twenty years out of date but some of it 's still the same . phd e: you could n't you could n't find the right connector to go into these things ? grad a: as having that type of connector . but my is that sony maybe uses a different number for their connector than everyone else does . and and so grad a: i have it downstairs . i do n't remember off the top of my head . grad a: and then , just in terms of how you wear them , i had thought about this before . , when you use a product like dragondictate , they have a very extensive description about how to wear the microphone and so on . but i felt that in a real situation we were very seldom gon na get people to really do it and maybe it was n't worth concentrating on . professor g: , that 's a good back - off position . that 's what i was saying earlier , th that , we are gon na get some recordings that are imperfect and , hey , that 's life . but that it does n't hurt , the naturalness of the situation to try to have people wear the microphones properly , if possible , because , the natural situation is really what we have with the microphones on the table . professor g: , , in the target applications that we 're talking about , people are n't gon na be wearing head - mounted mikes anyway . so this is just for u these head - mounted mikes are just for use with research . and , it 's gon na make , if an - andreas plays around with language modeling , he 's not gon na be m wanna be messed up by people breathing into the microphone . so it 's , grad a: , i 'll dig through the documentation to dragondictate and ste s see if they still have the little form . phd b: it 's interesting , i talked to some ibm guys , last january , i was there . and so people who were working on the on their viavoice dictation product . and they said , the breathing is really a terrible problem for them , to not recognize breathing as speech . so , anything to reduce breathing is a good thing . grad a: , that 's the it seemed to me when i was using dragon that it was really microphone placement helped an in , an enormous amount . so you want it enough to the side so that when you exhale through your nose , it does n't the wind does n't hit the mike . grad a: everyone 's adjusting their microphones , . and then just close enough so that you get good volume . so , wearing it right about here seems to be about the right way to do it . professor g: i remember when i was when i used , , a prominent laboratory 's , speech recognizer about , this was , boy , this was a while ago , this was about twelve years ago . and , they were perturbed with me because i was breathing in instead of breathing out . and they had models for they had markov models for br breathing out but they did n't have them for breathing in . postdoc f: that 's interesting . , what i wondered is whether it 's possible to have to maybe use the display at the beginning to be able to judge how correctly , have someone do some routine whatever , and then see if when they 're breathing it 's showing . grad a: and so , i ' ve sat here and watched sometimes the breathing , and the bar going up and down , and i ' m thinking , i could say something , but i do n't want to make people self - conscious . stop breathing ! professor g: you 're not gon na get it perfect . 
and you can do some , , first - order thing about it , which is to have people move it , a away from being just directly in front of the middle but not too far away . professor g: and then , there 's not much because you ca n't al , interfere w you ca n't fine tune the meeting that much , . it 's postdoc f: it just seems like i if something l simple like that can be tweaked and the quality goes , , dramatically up , then it might be worth doing . grad a: and then also the position of the mike also . if it 's more directly , you 'll get better volume . so so , like , yours is pretty far down below your mouth . postdoc f: my my feedback from the transcribers is he is always close to crystal clear and just fan fantastic to postdoc f: , you , . you 're you 're also , your volume is greater . but but still , they say phd b: one more remark , concerning the sri recognizer . it is useful to transcribe and then ultimately train models for things like breath , and also laughter is very , very frequent and important to model . so , postdoc f: they are . they 're putting , so in curly brackets they put " inhale " or " breath " . postdoc f: it they and then in curly brackets they say " laughter " . now they 're not being awfully precise , m so they 're two types of laughter that are not being distinguished . one is when sometimes s someone will start laughing when they 're in the middle of a sentence . and and then the other one is when they finish the sentence and then they laugh . so , i did s i did some double checking to look through , you 'd need to have extra e extra complications , like time tags indicating the beginning and ending of the laughing through the utterance . postdoc f: and that and what they 're doing is in both cases just saying " curly brackets laughing " a after the unit . phd b: as as long as there is an indication that there was laughter somewhere between two words that 's sufficient , phd b: because actually the recognition of laughter once you kn , is pretty good . so as long as you can stick a , a t a tag in there that indicates that there was laughter , postdoc f: then and let me ask y and i got ta ask you one thing about that . postdoc f: so , if they laugh between two words , you 'd get it in between the two words . but if they laugh across three or four words you get it after those four words . does that matter ? phd b: , the thing that you is hard to deal with is whe when they speak while laughing . , and that 's , i do n't think that we can do very with that . so but , that 's not as frequent as just laughing between speaking , grad a: so are do you treat breath and laughter as phonetically , or as word models , or what ? phd b: we tried both . , currently , we use special words . there was a there 's actually a word for , it 's not just breathing but all kinds of mouth phd b: same thing ? . you ha . and each of these words has a dedicated phone . phd b: so the so the mouth noise , word has just a single phone , that is for that . grad a: right . so in the hybrid system we could train the net with a laughter phone and a breath sound phone . professor g: , it 's always the same thing . , you could say , let we now think that laughter should have three sub - units in the three states , different states . and then you would have three , , it 's u phd b: they actually are , it 's just a single , s , a single phone in the pronunciation , but it has a self - loop on it , so it can phd b: we train it like any other word . 
we also tried , absorbing these , both laughter and actually also noise , and , yes . anyway . we also tried absorbing that into the pause model , the model that matches the between words . and , it did n't work as . so . grad a: ok . can you hand me your digit form ? wanna mark that you did not read digits . postdoc f: you you did get me to thinking about i ' m not really which is more frequent , whether f laughing it may be an individual thing . some people are more prone to laughing when they 're speaking . grad a: i was noticing that with dan in the one that we , we hand tran hand - segmented , professor g: i ' m it 's very individual . and and one thing that c that we 're not doing , is we 're not claiming to , get be getting a representation of mankind in these recordings . we have this very , very tiny sample of professor g: so , who knows . why don why do n't we just since we 're on this vein , why do n't we just continue with , what you were gon na say about the transcriptions and ? postdoc f: , the i ' m really very for i ' m extremely fortunate with the people who , applied and who are transcribing for us . they are , really perceptive and very , and i ' m not just saying that cuz they might be hearing this . postdoc f: , i know . i am i ' m serious . they 're just super . so i , e , i brought them in and , trained them in pairs because people can raise questions postdoc f: , i the they think about different things and they think of different and , i trained them to , f on about a minute or two of the one that was already transcribed . this also gives me a sense of , i can use that later , with reference to inter - coder reliability issues . but the main thing was to get them used to the conventions and , the idea of the th the size of the unit versus how long it takes to play it back so these th calibration issues . and then , set them loose and they 're they all have e a already background in using computers . they 're , they 're trained in linguistics . postdoc f: they got , they 're very perce they 'll so one of them said " , he really said " n " , not really " and " , so what should i do with that ? " and i said , " for our purposes , i do have a convention . if it 's an a noncanonical p " that one , we , with eric 's work , i figure we can just treat that as a variant . but i told them if there 's an obvious speech error , like i said in one thing , and i gave my example , like i said , " microfon " in instead of " microphone " . did n't bother i knew it when i said it . i remember s thinking " , that 's not correctly pronounced " . but it but it 's not worth fixing cuz often when you 're speaking everybody knows what you mean . postdoc f: but i have a convention that if it 's a noncanonical pronunciation a speech error with , wi within the realm of resolution that you can tell in this native english american english speaker , that i did n't mean to say " microfon . " then you 'd put a little tick at the beginning of the word , and that just signals that , this is not standard , and then in curly brackets " pron { nonvocalsound } error " . and , and other than that , it 's w word level . but , the fact that they noticed , the " nnn " . " he said " nnn " , not " and " . what shall i do with that ? " , they 're very perceptive . and and s several of them are trained in ipa . c they really could do phonetic transcription if we wanted them to . professor g: right . , it might be something we 'd wanna do with some , s small subset of the whole thing . 
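phd b's description a little earlier , laughter and mouth noise as ordinary words whose pronunciation is a single dedicated phone with a self-loop , pins the duration behavior down to one number . a toy sketch follows ; the 0.9 loop probability is invented , since in practice it is trained like any other transition .

    import numpy as np

    # one-phone "word" for laughter: a single emitting state with a self-loop,
    # so it can absorb laughter of arbitrary length between ordinary words.
    STAY = 0.9                       # illustrative, normally estimated in training
    trans = np.array([
        # entry  laugh   exit
        [0.0,    1.0,    0.0],       # entry: always into the laugh state
        [0.0,    STAY,   1 - STAY],  # laugh: loop or leave
        [0.0,    0.0,    1.0],       # exit (absorbing)
    ])

    # geometric duration model: expected dwell is 1 / (1 - STAY) frames
    print("expected laugh length:", 1 / (1 - STAY), "frames")

    # lexicon entries as described: each noise word gets its own dedicated
    # phone (these names are hypothetical)
    lexicon = {"[laughter]": ["laugh_phone"], "[mouth_noise]": ["noise_phone"]}

in a hybrid system the same idea carries over by giving the net a dedicated laughter output class , as grad a suggests above .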
postdoc f: and i ' m also thinking these people are a terrific pool . , if , so i told them that , we if this will continue past the end of the month and i also m i think they know that the data p source is limited and i may not be able to keep them employed till the end of the month even , although i hope to . professor g: the other thing we could do , actually , is , use them for a more detailed analysis of the overlaps . postdoc f: and , that 'd be so super . they would be so s so terrific . grad a: , this was something that we were talking about . we could get a very detailed overlap if they were willing to transcribe each meeting four or five times . right ? one for each participant . so they could by hand professor g: , that 's one way to do it . but i ' ve been saying the other thing is just go through it for the overlaps . professor g: given that y and do so instead of doing phonetic , transcription for the whole thing , which we know from the steve 's experience with the switchboard transcription is , very time - consuming . and and , it took them i how many months to do to get four hours . and so that has n't been really our focus . , we can consider it . but , the other thing is since we ' ve been spending so much time thinking about overlaps is maybe get a much more detailed analysis of the overlaps . but anyway , i ' m open to c our consideration . professor g: i do n't wanna say that by fiat . i ' m open to every consideration of what are some other kinds of detailed analysis that would be most useful . i this year we actually , can do it . professor g: it 's a we have due to @ variations in funding we have we seem to be doing , very on m money for this year , and next year we may have much less . professor g: , calendar year two thousand one . so it 's , it 's we do n't wanna hire a bunch of people , a long - term staff , professor g: because the funding that we ' ve gotten is a big chunk for this year . but having temporary people doing some specific thing that we need is actually a perfect match to that , funding . postdoc f: wonderful . and then school will start in the sixt on the sixteenth . some of them will have to cut back their hours at that point . postdoc f: but { nonvocalsound } some of them are . , why do i would n't say forty - hour weeks . no . but what is , i should n't say it that way because { nonvocalsound } that does sound like forty - hour weeks . i th i would say they 're probably { nonvocalsound } they do n't have o they do n't have other things that are taking away their time . postdoc f: no . you 're right . it 's i it would be too taxing . but , they 're putting { nonvocalsound } in a lot of and and i checked them over . i have n't checked them all , but just spot - checking . they 're fantastic . professor g: i remember when we were transcribing berp , , ron , volunteered to do some of that . and , he was the first he did was transcribing chuck . and he 's saying " you , i always thought chuck spoke really . " postdoc f: , and i also thought , y liz has this , , and i do also , this interest in the types of overlaps that are involved . these people would be { nonvocalsound } great choices for doing coding of that type if we wanted , grad a: it would also be interesting to have , a couple of the meetings have more than one transcriber do , cuz i ' m curious about inter - annotator agreement . postdoc f: th - that 'd be that 's a good idea . 
, there 's also , the e in my mind , a an - andreas was leading to this topic , the idea that , we have n't yet seen the type of transcript that we get from ibm , and it may just be , pristine . but on the other hand , given the lesser interface cuz this is , we ' ve got a good interface , we ' ve got great headphones , m professor g: it could be that they will theirs will end up being a fir first pass . professor g: maybe an elaborate one , cuz again they probably are gon na do these alignments , which will also clear things up . postdoc f: that 's that 's true . al - although you have to s do n't you have to start with a close enough approximation { nonvocalsound } of the verbal part { nonvocalsound } to be able to ? professor g: , tha that 's debatable . , so the argument is that if your statistical system is good it will , clean things up . so it 's got its own objective criterion . and , so in principle you could start up with something that was rough , to give an example of , something we used to do , at one point , back when chuck was here in early times , is we would take , da take a word and , have a canonical pronunciation and , if there was five phones in a word , you 'd break up the word , into five equal - length pieces which is completely gross . professor g: , th the timing is off all over the place in just about any word . but it 's o k . you start off with that and the statistical system then aligns things , and eventually you get something that does n't really look too bad . professor g: so so using a good aligner , actually can help a lot . but , , they both help each other . if you have a if you have a better starting point , then it helps the aligner . if you have a good alignment , it helps the , th the human in taking less time to correct things . so so postdoc f: excellent . i there 's another aspect , too , and i , this is very possibly a different , topic . but , { nonvocalsound } , just let me say with reference to this idea of , higher - order organization within meetings . so like in a , the topics that are covered during a meeting with reference to the other , uses of the data , so being able to find where so - and - so talked about such - and - such , then , e , i did a rough pass { nonvocalsound } on encoding , like , episode - like level things on the , transcribed meeting already transcribed meeting . and i if , where { nonvocalsound } that i if that 's something that we wanna do with each meeting , like a , it 's like a manifest , when you get a box full of , or if that 's , i i what , level of detail would be most useful . i i if that 's something that i should do when i look over it , or if we want someone else to do , or whatever . but this issue of the contents of the meeting in an outline form . grad a: it just whoever is interested can do that . , so if someone wants to use that data professor g: , was p , the thing i ' m concerned about is we wanted to do these digits and i have n't heard , from jose yet . grad a: we could skip the digits . we do n't have to read digits each time . professor g: so so i 'd like to do that . but , do you , maybe , ? did you prepare some whole thing you wanted us just to see ? phd d: it 's fast , because , i have the results , of the study of different energy without the law length . , , in the measurement , the average , dividing by the , variance . 
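the flat-start trick morgan recalls just above , carving a word's span into equal-length pieces , one per phone of the canonical pronunciation , and letting the statistical system clean the timing up , is only a few lines . the word and phones in the example are made up .

    def uniform_alignment(word_start, word_end, phones):
        """flat-start seed: give each phone an equal share of the word's span.
        gross, as the meeting says, but a workable starting point that the
        aligner's own retraining then repairs."""
        dur = (word_end - word_start) / len(phones)
        return [(p, word_start + i * dur, word_start + (i + 1) * dur)
                for i, p in enumerate(phones)]

    # e.g. a five-phone word spanning 12.30-12.80 s:
    print(uniform_alignment(12.30, 12.80, ["m", "ay", "k", "r", "ow"]))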
, i th i the other , the last w , meeting , i if you remain we have problem to with the parameter with the representations of parameter , because the valleys and the peaks in the signal , look like , it does n't follow to the energy in the signal . and it was a problem , with the scale . professor g: so that 's that 's enough then . no , that there 's no point in going through all of that if that 's the bottom line , really . so , we have to start , there 's two suggestions , really , which is , what we said before is that , professor g: it looks like , at least that you have n't found an obvious way to normalize so that the energy is anything like a reliable , indicator of the overlap . , i ' m still a little f think that 's a little funny . these things l @ seems like there should be , but you do n't want to keep , keep knocking at it if it 's if you 're not getting any result with that . but , the other things that we talked about is , pitch - related things and harmonicity - related things , so which we thought also should be some a reasonable indicator . but , a completely different tack on it wou is the one that was suggested , by your colleagues in spain , which is to say , do n't worry so much about the , features . that is to say , use , as you 're doing with the speech , nonspeech , use some very general features . and , then , look at it more from the aspect of modeling . , have a couple markov models and , try to indi try to determine , w when is th when are you in an overlap , when are you not in an overlap . and let the , statistical system determine what 's the right way to look at the data . i , it would be interesting to find individual features and put them together . that you 'd end up with a better system overall . but given the limitation in time and given the fact that javier 's system already exists doing this thing , but , its main limitation is that , again , it 's only looking at silences which would maybe that 's a better place to go . phd d: i that , the possibility , can be that , thilo , working , with a new class , not only , nonspeech and speech , but , in the speech class , dividing , speech , of from a speaker and overlapping , to try to do , a fast , experiment to prove that , nnn , this fea , general feature , can solve the problem , and wh what nnn , how far is phd d: and , i have prepared the pitch tracker now . and i hope the next week i will have , some results and we will show we will see , the parameter the pitch , tracking in with the program . professor g: ha - h have you ever looked at the , javier 's , speech segmenter ? . maybe m you could , you kn show thilo that . cuz again the idea is there the limitation there again was that he was only using it to look at silence as a as a p putative split point between speakers . but if you included , broadened classes then in principle maybe you can cover the overlap cases . phd c: , but i ' m not too if we can really represent overlap with the s detector i used up to now , grad a: it does n't have the same gaus - , h m modeling , which is a drawback . but , phd d: javier you mean ja - , javier program ? no , javier di does n't worked with , a markov grad a: it 's just , that i it he has the two - pass issue that what he does is , as a first pass he p he does , a at where the divisions might be and he overestimates . and that 's just a data reduction step , so that you 're not trying at every time interval . and so those are the putative places where he tries . 
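one concrete point in the "compared to what ?" space morgan runs through above is short-window log energy normalized by a long-window running mean , that is , subtraction in the log domain . the 25 ms / 2 s window pair is a guess , and since the meeting's own conclusion is that no such variant worked cleanly , this is a sketch of the search space rather than a recipe .

    import numpy as np

    def normalized_log_energy(x, sr=16000, short_s=0.025, long_s=2.0):
        """short-window log energy minus a long-window running average of it:
        one point in the normalization space discussed (energy vs. log energy,
        window lengths, etc.), not a known-good overlap feature."""
        hop = int(short_s * sr)
        n = (len(x) // hop) * hop
        log_e = 10 * np.log10(np.mean(x[:n].reshape(-1, hop) ** 2, axis=1) + 1e-12)
        k = max(1, int(long_s / short_s))        # short frames per long window
        baseline = np.convolve(log_e, np.ones(k) / k, mode="same")
        return log_e - baseline                  # >0: louder than local context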
and right now he 's doing that with silence and that does n't work with the meeting recorder . so if we used another method to get the first pass , it would probably work . grad a: it 's a good method . as long as the len as long the segments are long enough . that 's the other problem . professor g: o - k ok . so let me go back to what you had , though . the other thing one could do is could n't , it 's so you have two categories and you have markov models for each . could n't you have a third category ? so you have , nonspeech , single - person speech , and multiple - person speech ? postdoc f: he has this on his board actually . do n't you have , like those several different categories on the board ? phd c: i ' m not . about , adding , another class too . but it 's not too easy , the transition between the different class , to model them in the system i have now . but it could be possible , in principle . professor g: , i this is all pretty gross . , the th the reason why , i was suggesting originally that we look at features is because , we 're doing something we have n't done before , we should at least look at the space and understand it seems like if two people two or more people talk at once , it should get louder , and , there should be some discontinuity in pitch contours , professor g: and , there should overall be a , smaller proportion of the total energy that is explained by any particular harmonic sequence in the spectrum . so those are all things that should be there . so far , , jose has been , i was told i should be calling you pepe , but by your friends , anyway , the has , been exploring , e largely the energy issue as with a lot of things , it is not , like this , it 's not as simple as it sounds . and then there 's , is it energy ? is it log energy ? is it lpc residual energy ? is it is it , delta of those things ? , what is it no , just a simple number absolute number is n't gon na work . so it should be with compared to what ? should there be a long window for the normalizing factor and a short window for what you 're looking at ? or , how b short should they be ? th he 's been playing around with a lot of these different things and so far at least has not come up with any combination that really gave you an indicator . i still have a hunch that there 's it 's in there some place , but it may be given that you have a limited time here , it just may not be the best thing to focus on for the remaining of it . professor g: so pitch - related and harmonic - related , i ' m somewhat more hopeful for it . but it seems like if we just wanna get something to work , that , their suggestion of th - they were suggesting going to markov models , but in addition there 's an expansion of what javier did . and one of those things , looking at the statistical component , professor g: even if the features that you give it are maybe not ideal for it , it 's just this general filter bank or cepstrum , eee it 's in there somewhere probably . phd d: but , what did you think about the possibility of using the javier software ? , the bic criterion , the t to train the gaussian , using the mark , by hand , to distinguish be mmm , to train overlapping zone and speech zone . , i that an interesting , experiment , could be , th , to prove that , mmm , if s we suppose that , the first step , the classifier what were the classifier from javier or classifier from thilo ? w what happen with the second step ? , what happen with the , clu the , the clu the clustering process ? using the gaussian . 
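the "bic criterion" phd d refers to is the standard bayesian information criterion test for a speaker change : model a window of frames as one full-covariance gaussian versus two gaussians split at a putative change point , and accept the split when the penalized likelihood gain is positive . the sketch below follows the usual textbook formulation , not javier's actual code ( lambda = 1.0 is the common default ) .

    import numpy as np

    def delta_bic(X, t, lam=1.0):
        """delta-BIC for a change point at frame t in features X (n_frames, d).
        positive values favor 'two segments' over 'one'. single full-covariance
        gaussian per segment, as in the classic formulation."""
        def logdet_cov(Z):
            d = Z.shape[1]
            cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)   # regularized
            return np.linalg.slogdet(cov)[1]
        n, d = X.shape
        gain = 0.5 * (n * logdet_cov(X)
                      - t * logdet_cov(X[:t])
                      - (n - t) * logdet_cov(X[t:]))
        penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
        return gain - penalty

run over thilo-style putative boundaries instead of silence-only ones , this is the expansion of javier's two-pass scheme the group is converging on .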
phd d: , that is enough , to work , to , separate or to distinguish , between overlapping zone and , speaker zone ? because th if we , nnn , develop an classifier and the second step does n't work , we have another problem . grad a: i . i had tried doing it by hand at one point with a very short sample , grad a: and it worked pretty , but i have n't worked with it a lot . so what i d i took a hand - segmented sample and i added ten times the amount of numbers at random , and it did pick out pretty good boundaries . phd d: but it 's possible with my segmentation by hand that we have information about the overlapping , grad a: right . so if we fed the hand - segmentation to javier 's and it does n't work , then we know something 's wrong . phd d: the n n . the demonstration by hand . segmentation by hand i is the fast experiment .
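for the pitch and harmonicity track phd d is starting on , morgan's intuition that during overlap less of the signal is explained by any single harmonic series has a cheap time-domain proxy : the height of the strongest non-zero-lag autocorrelation peak . in the sketch below , the 50-400 hz pitch range and the reading "low on a voiced frame suggests competing talkers" are untested assumptions , not a validated detector .

    import numpy as np

    def harmonicity(frame, sr=16000, f_lo=50.0, f_hi=400.0):
        """normalized autocorrelation peak within the pitch lag range.
        near 1.0: one dominant harmonic series; lower values on clearly
        voiced frames might hint at overlapped talkers. assumes frames
        longer than sr / f_lo samples (20+ ms at these defaults)."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        if ac[0] <= 0:
            return 0.0                       # silent frame
        ac = ac / ac[0]
        lo, hi = int(sr / f_hi), int(sr / f_lo)
        return float(ac[lo:hi].max())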
the berkeley meeting recorder group talked about the ongoing transcription effort and issues related to the transcriber tool , which despite its limitations for capturing tight time markings for overlapping speech , will continue to remain in use. speaker mn014 explained his efforts to pre-segment the signal into speech and non-speech portions for facilitating transcriptions. recording equipment and procedures were discussed , with a focus on audible breathing and the need for standards in microphone wear and use. and , finally , it was determined that speaker mn005's efforts to detect speaker overlap using energy should instead be focussed on pitch- and harmonicity-related features or be guided by a non-featural , statistical approach , i.e . via the use of markov models. in the interest of time , it was decided that the group should continue using the existing transcriber tool and perform a forced alignment on the close-talking microphones that will , it is hoped , help to recover some of the time information indicating where different speaker overlaps occurred in the signal. a meeting will be arranged with nist to decide on a common standard and format for doing transcriptions. one or two meetings will be assigned to multiple transcribers to check for inter-annotator agreement. to cut down on audible breaths during recordings , the group will institute some level of standards for microphone wear and use. speaker mn005 will feed his hand-segmented data into the speech segmenter developed by javier to train it to identify different types of speech ( i.e . that of single versus multiple speakers ) , as well as focussing on pitch- and harmonicity-related features for identifying overlapping speech. there is no channel identifier to help in encoding speaker overlaps. speech uttered while laughing is problematic for asr. so far , speaker mn005's attempts to detect speaker overlap have been unsuccessful , as it has not been possible to normalize energy as a reliable indicator of overlap. speaker mn014's efforts to detect speech/non-speech portions in the mixed signal ( using an hmm-based detector with gaussian mixtures ) have produced pre-segmentations that facilitate the transcription effort. speaker mn014 also trained the system to identify speech from loud versus quiet speakers. such pre-segmentation modifications allow the experimenter to specify the minimum length of speech and silence portions desired , and also facilitate the identification of pauses and utterance boundaries. the transcriber pool is making quick progress , and may be used in the future to perform other types of coding , e.g . a more detailed analysis of speaker overlap. transcribers are coding non-speech gestures , such as audible breaths and laughter , both of which are useful for improving recognition results. recent modifications to the transcriber tool allow transcribers to listen to speech from different channels , as well as helping to preserve portions of overlapping speech , and enabling the creation of different output files for each channel for a cleaner and more segmentable transcript. the praat software package was discussed as an alternative transcription tool capable of representing multiple channels of speech. cross-correlation was discussed as a means of enabling speaker identification , and may be integrated into future work.
###dialogue: grad a: not pre - doing everything . the lunch went a little later than i was expecting , chuck . postdoc f: , i ' m i sent a couple of items . they 're they 're practical . postdoc f: , maybe { nonvocalsound } raise the issue of microphone , procedures with reference to the cleanliness of the recordings . postdoc f: and then maybe { nonvocalsound } ask , th , these guys . the we have great , p steps forward in terms of the nonspeech - speech pre - segmenting of the signal . phd b: since , since i have to leave as usual at three - thirty , can we do the interesting first ? professor g: , and , the other thing , which i 'll just say very briefly that maybe relates to that a little bit , which is that , , one of the suggestions that came up in a brief meeting i had the other day when i was in spain with , manolo pardo and javier , ferreiros , who was here before , was , why not start with what they had before but add in the non - silence boundaries . so , in what javier did before when they were doing , h he was looking for , speaker change points . professor g: as a simplification , he originally did this only using silence as , a putative , speaker change point . and , he did not , say , look at points where you were changing broad sp , phonetic class , . and for broadcast news , that was fine . here it 's not . and , so one of the things that they were pushing in d in discussing with me is , w why are you spending so much time , on the , feature issue , when perhaps if you deal with what you were using before and then just broadened it a bit , instead of just ta using silence as putative change point also ? professor g: so then you ' ve got you already have the super - structure with gaussians and h - , simple h m ms and . and you might so there was a little bit of a difference of opinion because that it was it 's interesting to look at what features are useful . but , on the other hand i saw that the they had a good point that , if we had something that worked for many cases before , maybe starting from there a little bit because ultimately we 're gon na end up with some s su structure like that , where you have some simple hmm and you 're testing the hypothesis that , there is a change . so so anyway , reporting that . professor g: but , so . , why do n't we do the speech - nonspeech discussion ? phd c: , so , what we did so far was using the mixed file to detect s speech or nonspeech portions in that . phd c: and what i did so far is used our old munich system , which is an hmm - ba based system with gaussian mixtures for s speech and nonspeech . and it was a system which used only one gaussian for silence and one gaussian for speech . and now i added , multi - mixture possibility for speech and nonspeech . and i did some training on one dialogue , which was transcribed by . we we did a nons s speech - nonspeech transcription . phd c: adam , dave , and i , we did , for that dialogue and i trained it on that . and i did some pre - segmentations for jane . and i ' m not how good they are or what the transcribers say . they they can use it or ? postdoc f: , they think it 's a terrific improvement . and , it real it just makes a world of difference . and , y you also did some something in addition which was , for those in which there { nonvocalsound } was , quiet speakers in the mix . phd c: , . that that was one thing , why i added more mixtures for the speech . so i saw that there were loud loudly speaking speakers and quietly speaking speakers . 
and so i did two mixtures , one for the loud speakers and one for the quiet speakers . grad a: and did you hand - label who was loud and who was quiet , or did you just ? phd c: hopefully . it 's just our old munich , loudness - based spectrum on mel scale twenty critical bands and then loudness . phd c: and four additional features , which is energy , loudness , modified loudness , and zero crossing rate . so it 's twenty - four features . postdoc f: and you also provided me with several different versions , which i compared . and so you change { nonvocalsound } parameters . what do you wanna say something about the parameters { nonvocalsound } that you change ? phd c: you can specify the minimum length of speech or and silence portions which you want . and so i did some modifications in those parameters , changing the minimum length for s for silence to have , er to have , to have more or less , silence portions in inserted . grad a: right . so this would work for , pauses and utterance boundaries and things like that . but for overlap i imagine that does n't work , that you 'll have plenty of s sections that are postdoc f: that 's true . but { nonvocalsound } it saves so much time the { nonvocalsound } transcribers just enormous , enormous savings . fantastic . professor g: that 's great . , just qu one quickly , still on the features . so you have these twenty - four features . , a lot of them are spectral features . is there a transformation , like principal components transformation ? phd c: but we saw , when we used it , f for our close - talking microphone , which , for our recognizer in munich we saw that w it 's not so necessary . it it works as f with without , a lda . professor g: ok . no , i was j curious . , i do n't think it 's a big deal for this application , postdoc f: ok . but then there 's another thing that also thilo 's involved with , which is , ok , and also da - dave gelbart . so there 's this problem of and w and so we had this meeting . th - the { nonvocalsound } also adam , before the before you went away . we , regarding the representation { nonvocalsound } of overlaps , because at present , { nonvocalsound } , because { nonvocalsound } of the limitations of th the interface we 're using , overlaps are , not being { nonvocalsound } encoded by { nonvocalsound } the transcribers in as complete { nonvocalsound } and , detailed a way as it might be , and as might be desired would be desired in the corpus ultimately . so we do n't have start and end points { nonvocalsound } at each point where there 's an overlap . we just have the { nonvocalsound } overlaps { nonvocalsound } encoded in a simple bin . , ok . so { nonvocalsound } @ the limits of the { nonvocalsound } over of the interface are such that we were at this meeting we were entertaining how we might either expand { nonvocalsound } the interface or find other tools which already do what would be useful . because what would ultimately be , ideal in my view and , i had the sense that it was consensus , is that , a thorough - going musical score notation would be { nonvocalsound } the best way to go . because { nonvocalsound } you can have multiple channels , there 's a single time - line , it 's very clear , flexible , and all those things . ok . so , , i spoke i had a meeting with dave gelbart on and he had , excellent ideas on how the interface could be modified to do this representation . 
but , he in the meantime you were checking into the existence of already , existing interfaces which might already have these properties . so , do you wanna say something about that ? phd c: yes . , i talked with , munich guys from ludwi - ludwig maximilians university , who do a lot of transcribing and transliterations . and they said they have , a tool they developed themselves and they ca n't give away , f it 's too error - prone , and had it 's not supported , a and but , susanne bur - burger , who is at se cmu , he wa who was formally at in munich and w and is now at with cmu , she said she has something which she uses to do eight channels , trans transliterations , eight channels simultaneously , phd c: so i ' m not if we can use it . she said she would give it to us . it would n't be a problem . and i ' ve got some manual down in my office . grad a: , maybe we should get it and if it 's good enough we 'll arrange windows machines to be available . postdoc f: i also wanted to be , i ' ve seen the this is called praat , praat , { nonvocalsound } which i means spee speech in dutch . postdoc f: but in terms { nonvocalsound } of it being { nonvocalsound } windows { nonvocalsound } versus professor g: the other thing , to keep in mind , we ' ve been very concerned to get all this rolling so that we would actually have data , professor g: but , our outside sponsor is actually gon na kick in and ultimately that path will be smoothed out . so i if we have a long - term need to do lots and lots of transcribing . we had a very quick need to get something out and we 'd like to be able to do some later because just it 's inter it 's interesting . but as far a , with any luck we 'll be able to wind down the larger project . grad a: what our decision was is that we 'll go ahead with what we have with a not very fine time scale on the overlaps . grad a: and and do what we can later to clean that up if we need to . postdoc f: and and i was just thinking that , if it were possible to bring that in , like , this week , then { nonvocalsound } when they 're encoding the overlaps { nonvocalsound } it would be for them to be able to specify when , the start points and end points of overlaps . th - they 're { nonvocalsound } making really quick progress . and , so my goal was w m my charge was to get eleven hours by the end of the month . and it 'll be i ' m clear that we 'll be able to do that . postdoc f: i sent { nonvocalsound } it to , who did i send that to ? i sent it to a list and { nonvocalsound } i sent it to { nonvocalsound } the { nonvocalsound } e to the local list . postdoc f: you saw that ? so brian did tell { nonvocalsound } me that { nonvocalsound } what you said , that , { nonvocalsound } that { nonvocalsound } our that they are making progress and that he 's going that { nonvocalsound } they 're { nonvocalsound } going he 's gon na check the f the output of the first transcription and professor g: , it 's all the difference in the world . , he 's on it now . professor g: . , it 's just saying that one of our best people is on it , who just does n't happen to be here anymore . someone else pays him . phd b: , do n't we did n't we previously decide that the ibm transcripts would have to be checked anyway and possibly augmented ? grad a: and since he 's not here , i 'll repeat it to at least modify transcriber , which , if we do n't have something else that works , that 's a pretty good way of going . grad a: and we discussed on some methods to do it . 
my approach originally and i ' ve already hacked on it a little bit was too slow because i was trying to display all the waveforms . but he pointed out that you do n't really have to . that 's a good point . if you just display the mixed waveform and then have a user interface for editing the different channels , that 's perfectly sufficient . postdoc f: exactly . and just keep those things separate . and dan ellis 's hack already allows them to display different waveforms to clarify overlaps and things , postdoc f: yes , but from the transcriber 's perspective those two functions are separate . and dan ellis 's hack handles the ability to choose different waveforms from moment to moment . grad a: but only to listen to , not to look at . the waveform you 're looking at does n't change . postdoc f: but that 's ok , cuz they 're focused on the ear anyway . and then the hack to preserve the overlaps better would be one which creates different output files for each channel , which would also serve liz 's request of having a single - channel , cleanly separable transcript tied to a single channel of audio . postdoc f: not directly . i ' m trying to think if i could have gotten it over a list . i do n't know . professor g: ok . holidays may have interrupted things , cuz they seem to want to get clear on standards for transcription with us . grad a: and agree upon a format . though i do n't remember email on that . so was i not in the loop on that ? professor g: i do n't think i mailed anybody . i think i told them to contact jane if they had a professor g: so maybe i 'll ping them a little bit about it to get that straight . professor g: cuz with any luck there 'll actually be collections at columbia , collections at uw dan is very interested in doing some other things , grad a: it 's important both for the notation and the machine representation to be the same . postdoc f: there was also this email from dan regarding the speech - nonspeech segmentation thing . and dave gelbart is interested in pursuing the aspect of using amplitude as a basis for the separation . professor g: i had mentioned this a couple times before , the commercial devices that do voice - active miking look at the energy at each of the mikes . and you compare the energy here to some function of all of the mikes . so by doing that , rather than setting any absolute threshold , you actually can do pretty good selection of who 's talking . and those systems work very well , so people use them in panel discussions and with sound reinforcement . the guy i knew who built them built them like twenty years ago , so the techniques work pretty well . postdoc f: cuz there is one thing that we do n't have right now and that is the automatic channel identifier . that would help in terms of encoding of overlaps . the transcribers would have less disentangling to do if that were available . professor g: .
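A hedged sketch of the "voice-active miking" idea morgan describes (picked up again at the start of his next turn): rather than an absolute threshold, compare each channel's short-time energy to a statistic over all channels. The mean reference and the 1.5 margin are assumptions, not anything from the commercial devices mentioned.

```python
import numpy as np

def active_channels(frame_energies, margin=1.5):
    """frame_energies: array of shape (n_channels,) for one analysis frame.
    Returns a boolean mask of channels judged active in this frame."""
    reference = np.mean(frame_energies)  # "some function of all of the mikes"
    return frame_energies > margin * reference
```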
so you can look at some you have to play around a little bit to figure out what the right statistic is , professor g: but you compare each microphone to some statistic based on the overall , and we also have the advantage of having distant mikes too . grad a: although using the close - talking would be much better , would n't it ? professor g: if i was actually working on it , i 'd sit there and play around with it and get a feeling for it . you certainly wanna use the close - talking at least . i do n't know if the other would add some other helpful dimension or not . phd d: ok . what are the different classes to code the overlap you will use ? postdoc f: so types of overlap ? so at a meeting that was n't transcribed , we worked up a typology . and , postdoc f: yes , exactly . that has n't changed . so it 's a two - tiered structure where the first one is whether the person who 's interrupted continues or not . and then below that there are subcategories that have more to do with , is it simply backchannel , or is it someone completing someone else 's thought , or is it someone introducing a new thought . grad a: and i hope that if we do a forced alignment with the close - talking mike , that will be enough to recover at least some of the time information of when the overlap occurred . phd b: so who 's gon na do that ? who 's gon na do forced alignment ? grad a: and i imagine they still plan to , but i have n't spoken with them about that recently . postdoc f: this is wonderful to have a direct contact like that . lemme ask you this . it occurs to me one of my transcribers told me today that she 'll be finished with one meeting by she said tomorrow , but then she said let 's just say maybe the day after , just to be on the safe side . i could send brian the transcript if it would be possible , or a good idea or not , to try to do a forced alignment on the way we 're encoding overlaps now . professor g: just talk to him about it . he 's a colleague , a friend , and , professor g: it was just a question of getting the right people connected in , who had the time . grad a: is he on the mailing list ? the meeting recorder mailing list ? we should add him . phd e: did something happen , morgan , that he got put on this , or was he already on it , professor g: no , something happened . i do n't know what . postdoc f: that 'd be like him . he 's great . professor g: so , where are we ? maybe , briefly , why do n't we talk about microphone issues ? grad a: so one thing is that i did look on sony 's site for a replacement for the mikes , for the head - worn ones , cuz they 're so uncomfortable . but i need someone who knows more about mikes than i do , because i could n't find a single other model that seemed like it would fit the connector , which seems really unlikely to me .
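The two-tiered overlap typology jane outlines could be encoded as a small structure like the following. The top tier is taken from the discussion (whether the interrupted speaker continues or not); the assignment of the subcategories to tiers is an illustrative guess, not the actual coding manual.

```python
# Hypothetical encoding of the two-tiered overlap typology; the subcategory
# placement under each tier is a guess made for illustration only.
OVERLAP_TYPES = {
    "interrupted_speaker_continues": [
        "backchannel",               # e.g. an encouraging "uh-huh"
        "completes_others_thought",
    ],
    "interrupted_speaker_stops": [
        "introduces_new_thought",
        "floor_grab",
    ],
}
```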
does anyone , like , know stores or know about mikes who would know the right questions to ask ? professor g: i probably would . my knowledge is twenty years out of date but some of it 's still the same . phd e: you could n't find the right connector to go into these things ? grad a: not as having that type of connector . but my guess is that sony maybe uses a different number for their connector than everyone else does . grad a: i have it downstairs . i do n't remember off the top of my head . grad a: and then , just in terms of how you wear them , i had thought about this before . when you use a product like dragondictate , they have a very extensive description about how to wear the microphone and so on . but i felt that in a real situation we were very seldom gon na get people to really do it and maybe it was n't worth concentrating on . professor g: that 's a good back - off position . that 's what i was saying earlier , that we are gon na get some recordings that are imperfect and , hey , that 's life . but it does n't hurt the naturalness of the situation to try to have people wear the microphones properly , if possible , because the natural situation is really what we have with the microphones on the table . professor g: in the target applications that we 're talking about , people are n't gon na be wearing head - mounted mikes anyway . so these head - mounted mikes are just for use with research . and if andreas plays around with language modeling , he 's not gon na wanna be messed up by people breathing into the microphone . grad a: i 'll dig through the documentation to dragondictate and see if they still have the little form . phd b: it 's interesting , i talked to some ibm guys last january when i was there , people who were working on their viavoice dictation product . and they said the breathing is really a terrible problem for them , to not recognize breathing as speech . so anything to reduce breathing is a good thing . grad a: it seemed to me when i was using dragon that microphone placement really helped an enormous amount . so you want it enough to the side so that when you exhale through your nose , the wind does n't hit the mike . grad a: everyone 's adjusting their microphones now . and then just close enough so that you get good volume . so wearing it right about here seems to be about the right way to do it . professor g: i remember when i used a prominent laboratory 's speech recognizer boy , this was a while ago , about twelve years ago . and they were perturbed with me because i was breathing in instead of breathing out . they had markov models for breathing out but they did n't have them for breathing in . postdoc f: that 's interesting . what i wondered is whether it 's possible to maybe use the display at the beginning to be able to judge how correctly it 's worn have someone do some routine , whatever , and then see if when they 're breathing it 's showing . grad a: i ' ve sat here and watched sometimes the breathing , and the bar going up and down , and i ' m thinking , i could say something , but i do n't want to make people self - conscious . stop breathing ! professor g: you 're not gon na get it perfect .
and you can do some first - order thing about it , which is to have people move it away from being just directly in front of the middle , but not too far away . professor g: and then there 's not much more you can do , because you ca n't interfere you ca n't fine tune the meeting that much . postdoc f: it just seems like if something simple like that can be tweaked and the quality goes dramatically up , then it might be worth doing . grad a: and then also the position of the mike . if it 's more direct , you 'll get better volume . so , like , yours is pretty far down below your mouth . postdoc f: my feedback from the transcribers is he is always close to crystal clear and just fantastic to postdoc f: you 're also your volume is greater . but still , they say phd b: one more remark , concerning the sri recognizer . it is useful to transcribe and then ultimately train models for things like breath , and also laughter is very , very frequent and important to model . postdoc f: they are . so in curly brackets they put " inhale " or " breath " . postdoc f: and then in curly brackets they say " laughter " . now they 're not being awfully precise , so there are two types of laughter that are not being distinguished . one is when someone will start laughing when they 're in the middle of a sentence . and then the other one is when they finish the sentence and then they laugh . i did some double checking to look through you 'd need to have extra complications , like time tags indicating the beginning and ending of the laughing through the utterance . postdoc f: and what they 're doing is in both cases just saying " curly brackets laughing " after the unit . phd b: as long as there is an indication that there was laughter somewhere between two words , that 's sufficient , phd b: because actually the recognition of laughter is pretty good . so as long as you can stick a tag in there that indicates that there was laughter , postdoc f: then let me ask you one thing about that . postdoc f: so if they laugh between two words , you 'd get it in between the two words . but if they laugh across three or four words you get it after those four words . does that matter ? phd b: the thing that is hard to deal with is when they speak while laughing . i do n't think that we can do very well with that . but that 's not as frequent as just laughing between speaking , grad a: so do you treat breath and laughter phonetically , or as word models , or what ? phd b: we tried both . currently we use special words . there 's actually a word for it 's not just breathing but all kinds of mouth phd b: same thing ? and each of these words has a dedicated phone . phd b: so the mouth noise word has just a single phone that is for that . grad a: right . so in the hybrid system we could train the net with a laughter phone and a breath sound phone . professor g: it 's always the same thing . you could say we now think that laughter should have three sub - units , three different states , and then you would have three phd b: they actually are it 's just a single phone in the pronunciation , but it has a self - loop on it , so it can phd b: we train it like any other word .
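Since the transcribers mark these events inline in curly brackets, a downstream script only needs something like the following to separate words from event tags. The tag names are the ones mentioned above; everything else is an assumed convention for illustration.

```python
import re

TAG = re.compile(r"\{(\w+)\}")

def split_words_and_events(line):
    events = TAG.findall(line)          # e.g. ["laughter"]
    words = TAG.sub(" ", line).split()  # transcript with tags removed
    return words, events

# usage: words, events = split_words_and_events("so we agreed {laughter} to do that")
```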
we also tried absorbing these , both laughter and actually also noise , into the pause model , the model that matches the between words . and it did n't work as well . grad a: ok . can you hand me your digit form ? i wanna mark that you did not read digits . postdoc f: you did get me to thinking i ' m not really sure which is more frequent . laughing it may be an individual thing . some people are more prone to laughing when they 're speaking . grad a: i was noticing that with dan in the one that we hand - segmented , professor g: i ' m sure it 's very individual . and one thing that we 're not doing is we 're not claiming to be getting a representation of mankind in these recordings . we have this very , very tiny sample of professor g: so , who knows . since we 're on this vein , why do n't we just continue with what you were gon na say about the transcriptions ? postdoc f: i ' m extremely fortunate with the people who applied and who are transcribing for us . they are really perceptive , and i ' m not just saying that cuz they might be hearing this . postdoc f: i know . i ' m serious . they 're just super . so i brought them in and trained them in pairs because people can raise questions postdoc f: they think about different things and they think of different questions . and i trained them on about a minute or two of the one that was already transcribed . this also gives me a sense i can use that later with reference to inter - coder reliability issues . but the main thing was to get them used to the conventions and the idea of the size of the unit versus how long it takes to play it back these calibration issues . and then i set them loose . they all have a background in using computers , and they 're trained in linguistics . postdoc f: they 're very perceptive . so one of them said " he really said ' n ' , not really ' and ' , so what should i do with that ? " and i said , for our purposes i do have a convention . if it 's a noncanonical pronunciation that one , with eric 's work , i figure we can just treat as a variant . but i told them if there 's an obvious speech error and i gave my example , like i said " microfon " instead of " microphone " . i knew it when i said it . i remember thinking " that 's not correctly pronounced " . but it 's not worth fixing , cuz often when you 're speaking everybody knows what you mean . postdoc f: but i have a convention that if it 's a noncanonical pronunciation a speech error within the realm of resolution that you can tell in this native american english speaker that i did n't mean to say " microfon " then you 'd put a little tick at the beginning of the word , and that just signals that this is not standard , and then in curly brackets " pron error " . and other than that , it 's word level . but the fact that they noticed " he said ' n ' , not ' and ' , what shall i do with that ? " they 're very perceptive . and several of them are trained in ipa . they really could do phonetic transcription if we wanted them to . professor g: right . it might be something we 'd wanna do with some small subset of the whole thing .
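A sketch of the lexicon treatment andreas describes, with laughter and generic mouth noise each entered as a "word" whose pronunciation is a single dedicated phone (the self-loop lives in that phone's HMM topology, so one phone can cover any duration). The symbol names here are made up, not the SRI system's actual entries.

```python
# Hypothetical pronunciation lexicon fragment; "[laughter]" and "[noise]"
# are nonspeech words whose single dedicated phones self-loop in the HMM.
LEXICON = {
    "okay":       ["ow", "k", "ey"],
    "[laughter]": ["laugh_phone"],   # one dedicated phone, self-looping
    "[noise]":    ["noise_phone"],   # breaths and other mouth noise
}
```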
postdoc f: and i ' m also thinking these people are a terrific pool . so i told them that we do n't know if this will continue past the end of the month , and i think they know that the data source is limited and i may not be able to keep them employed till the end of the month even , although i hope to . professor g: the other thing we could do , actually , is use them for a more detailed analysis of the overlaps . postdoc f: that 'd be so super . they would be so terrific . grad a: this was something that we were talking about . we could get a very detailed overlap if they were willing to transcribe each meeting four or five times , right ? one for each participant . so they could by hand professor g: that 's one way to do it . but i ' ve been saying the other thing is just go through it for the overlaps . professor g: so instead of doing phonetic transcription for the whole thing which we know from steve 's experience with the switchboard transcription is very time - consuming ; it took them i do n't know how many months to get four hours , and that has n't really been our focus , though we can consider it the other thing , since we ' ve been spending so much time thinking about overlaps , is maybe to get a much more detailed analysis of the overlaps . but anyway , i ' m open to our consideration . professor g: i do n't wanna say that by fiat . i ' m open to every consideration of what are some other kinds of detailed analysis that would be most useful . i think this year we actually can do it . professor g: due to variations in funding we seem to be doing very well on money for this year , and next year we may have much less . professor g: calendar year two thousand one . so we do n't wanna hire a bunch of people , a long - term staff , professor g: because the funding that we ' ve gotten is a big chunk for this year . but having temporary people doing some specific thing that we need is actually a perfect match to that funding . postdoc f: wonderful . and then school will start on the sixteenth . some of them will have to cut back their hours at that point . postdoc f: but some of them are . i would n't say forty - hour weeks . no . i should n't say it that way because that does sound like forty - hour weeks . i would say they probably do n't have other things that are taking away their time . postdoc f: no . you 're right . it would be too taxing . but they 're putting in a lot , and i checked them over . i have n't checked them all , but just spot - checking , they 're fantastic . professor g: i remember when we were transcribing berp , ron volunteered to do some of that . and the first he did was transcribing chuck . and he 's saying " you know , i always thought chuck spoke really well . " postdoc f: and i also thought liz has this and i do also this interest in the types of overlaps that are involved . these people would be great choices for doing coding of that type if we wanted , grad a: it would also be interesting to have a couple of the meetings done by more than one transcriber , cuz i ' m curious about inter - annotator agreement . postdoc b: that 's a good idea .
there 's also , in my mind andreas was leading to this topic the idea that we have n't yet seen the type of transcript that we get from ibm , and it may just be pristine . but on the other hand , given the lesser interface cuz we ' ve got a good interface , we ' ve got great headphones professor g: it could be that theirs will end up being a first pass . professor g: maybe an elaborate one , cuz again they probably are gon na do these alignments , which will also clear things up . postdoc f: that 's true . although do n't you have to start with a close enough approximation of the verbal part to be able to ? professor g: that 's debatable . so the argument is that if your statistical system is good it will clean things up . so it 's got its own objective criterion . and so in principle you could start with something that was rough . to give an example , something we used to do at one point , back when chuck was here in early times , is we would take a word with a canonical pronunciation and , if there were five phones in the word , you 'd break up the word into five equal - length pieces which is completely gross . professor g: the timing is off all over the place in just about any word . but it 's ok . you start off with that and the statistical system then aligns things , and eventually you get something that does n't really look too bad . professor g: so using a good aligner actually can help a lot . and they both help each other . if you have a better starting point , then it helps the aligner . if you have a good alignment , it helps the human in taking less time to correct things . postdoc f: excellent . there 's another aspect , too , and this is very possibly a different topic . but just let me say , with reference to this idea of higher - order organization within meetings so like the topics that are covered during a meeting with reference to the other uses of the data , so being able to find where so - and - so talked about such - and - such i did a rough pass on encoding episode - level things on the already transcribed meeting . and i do n't know if that 's something that we wanna do with each meeting it 's like a manifest , when you get a box full of things or what level of detail would be most useful , or if that 's something that i should do when i look over it , or if we want someone else to do it , or whatever . but this issue of the contents of the meeting in an outline form . grad a: whoever is interested can do that . so if someone wants to use that data professor g: the thing i ' m concerned about is we wanted to do these digits and i have n't heard from jose yet . grad a: we could skip the digits . we do n't have to read digits each time . professor g: so i 'd like to do that . but did you prepare some whole thing you wanted us just to see ? phd d: it 's fast , because i have the results of the study of different energy without the law length , in the measurement the average dividing by the variance .
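The bootstrap alignment morgan recalls is easy to make concrete: with no better starting point, split a word's time span into equal-length pieces, one per phone of its canonical pronunciation, and let the statistical aligner refine from there. The frame indexing here is an assumption.

```python
def uniform_alignment(start_frame, end_frame, phones):
    """Assign equal-length spans to each phone of a word's canonical
    pronunciation -- the 'completely gross' but workable starting point."""
    n = len(phones)
    span = (end_frame - start_frame) / n
    return [
        (phone, int(start_frame + i * span), int(start_frame + (i + 1) * span))
        for i, phone in enumerate(phones)
    ]

# e.g. uniform_alignment(100, 150, ["m", "iy", "t", "ih", "ng"])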
the other thing , from the last meeting , if you remember we had a problem with the representation of the parameter , because the valleys and the peaks in the signal look like they do n't follow the energy in the signal . and it was a problem with the scale . professor g: so that 's enough then . there 's no point in going through all of that if that 's the bottom line , really . so there 's two suggestions , really , which is what we said before : professor g: it looks like , at least , that you have n't found an obvious way to normalize so that the energy is anything like a reliable indicator of the overlap . i still think that 's a little funny it seems like there should be but you do n't want to keep knocking at it if you 're not getting any result with that . the other things that we talked about are pitch - related things and harmonicity - related things , which we thought also should be a reasonable indicator . but a completely different tack on it is the one that was suggested by your colleagues in spain , which is to say , do n't worry so much about the features . that is to say , as you 're doing with the speech - nonspeech , use some very general features , and then look at it more from the aspect of modeling . have a couple of markov models and try to determine when are you in an overlap , when are you not in an overlap , and let the statistical system determine what 's the right way to look at the data . it would be interesting to find individual features and put them together , and that way you 'd end up with a better system overall . but given the limitation in time and given the fact that javier 's system already exists doing this thing though its main limitation is that , again , it 's only looking at silences maybe that 's a better place to go . phd d: i think the possibility can be that thilo works with a new class not only nonspeech and speech but , in the speech class , dividing speech from a single speaker and overlapping speech to try to do a fast experiment to prove that these general features can solve the problem , and how far it goes phd d: and i have prepared the pitch tracker now . and i hope next week i will have some results and we will see the pitch tracking with the program . professor g: have you ever looked at javier 's speech segmenter ? maybe you could show thilo that . cuz again , the limitation there was that he was only using it to look at silence as a putative split point between speakers . but if you included broadened classes then in principle maybe you can cover the overlap cases . phd c: but i ' m not too sure if we can really represent overlap with the detector i used up to now , grad a: it does n't have the same gaussian modeling , which is a drawback . but , phd d: javier , you mean javier 's program ? no , javier does n't work with a markov grad a: it 's just that he has the two - pass issue : what he does is , as a first pass , he makes a guess at where the divisions might be and he overestimates . and that 's just a data reduction step , so that you 're not trying at every time interval . and so those are the putative places where he tries .
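A toy version of the modeling route suggested here: one diagonal gaussian per class (nonspeech, single speaker, overlap) over generic frame features, with a viterbi pass whose switch penalty discourages implausibly short segments. The class set, features, and penalty value are placeholders, not javier's or thilo's actual systems.

```python
import numpy as np

def viterbi_classes(frames, means, variances, switch_penalty=5.0):
    """frames: (T, d); means/variances: (n_classes, d) diagonal gaussians.
    Returns the most likely class index per frame."""
    # per-frame log likelihood under each class's diagonal gaussian
    logp = -0.5 * (((frames[:, None, :] - means) ** 2) / variances
                   + np.log(2 * np.pi * variances)).sum(axis=2)  # (T, n)
    T, n = logp.shape
    cost = logp[0].copy()
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        # pay switch_penalty whenever the class changes between frames
        trans = cost[:, None] - switch_penalty * (1 - np.eye(n))
        back[t] = np.argmax(trans, axis=0)
        cost = trans[back[t], np.arange(n)] + logp[t]
    path = [int(np.argmax(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```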
and right now he 's doing that with silence , and that does n't work with the meeting recorder . so if we used another method to get the first pass , it would probably work . grad a: it 's a good method , as long as the segments are long enough . that 's the other problem . professor g: ok . so let me go back to what you had , though . the other thing one could do is so you have two categories and you have markov models for each . could n't you have a third category ? so you have nonspeech , single - person speech , and multiple - person speech ? postdoc f: he has this on his board actually . do n't you have , like , those several different categories on the board ? phd c: i ' m not sure about adding another class . it 's not too easy to model the transitions between the different classes in the system i have now . but it could be possible , in principle . professor g: this is all pretty gross . the reason why i was suggesting originally that we look at features is because we 're doing something we have n't done before , so we should at least look at the space and understand it . it seems like if two or more people talk at once , it should get louder , and there should be some discontinuity in pitch contours , professor g: and there should overall be a smaller proportion of the total energy that is explained by any particular harmonic sequence in the spectrum . so those are all things that should be there . so far jose i was told i should be calling you pepe , by your friends , anyway has been exploring largely the energy issue . as with a lot of things , it 's not as simple as it sounds . is it energy ? is it log energy ? is it lpc residual energy ? is it a delta of those things ? a simple absolute number is n't gon na work , so it should be compared to what ? should there be a long window for the normalizing factor and a short window for what you 're looking at ? how short should they be ? he 's been playing around with a lot of these different things and so far at least has not come up with any combination that really gave you an indicator . i still have a hunch that it 's in there some place , but given that you have a limited time here , it just may not be the best thing to focus on for the remainder of it . professor g: so pitch - related and harmonicity - related things i ' m somewhat more hopeful for . but it seems like if we just wanna get something to work , their suggestion they were suggesting going to markov models , and in addition there 's an expansion of what javier did . and one of those things is looking at the statistical component , professor g: even if the features that you give it are maybe not ideal for it just this general filter bank or cepstrum it 's in there somewhere probably . phd d: but what did you think about the possibility of using the javier software , the bic criterion , to train the gaussians , using the marks by hand to distinguish to train on overlapping zones and speech zones ? i think an interesting experiment could be to prove that , if we suppose the first step , the classifier whether the classifier from javier or the classifier from thilo what happens with the second step , the clustering process using the gaussians ?
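For reference, the BIC change-point criterion jose brings up can be sketched as follows: model a span of frames with one full-covariance gaussian versus two, and penalize the extra parameters. The lambda of 1.0 is the usual default; tuning it (and keeping both sides of the split long enough to estimate a covariance) is part of making this work.

```python
import numpy as np

def delta_bic(x, split, lambda_=1.0):
    """x: (n_frames, dim) features; split: candidate boundary index.
    Positive values favor a change point at `split`. Assumes both sides
    have enough frames to estimate a covariance."""
    def ll_term(seg):
        cov = np.cov(seg, rowvar=False) + 1e-6 * np.eye(seg.shape[1])
        return 0.5 * len(seg) * np.linalg.slogdet(cov)[1]
    n, d = x.shape
    penalty = 0.5 * lambda_ * (d + 0.5 * d * (d + 1)) * np.log(n)
    return ll_term(x) - ll_term(x[:split]) - ll_term(x[split:]) - penalty
```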
phd d: is that enough to work , to separate or to distinguish between an overlapping zone and a speaker zone ? because if we develop a classifier and the second step does n't work , we have another problem . grad a: i had tried doing it by hand at one point with a very short sample , grad a: and it worked pretty well , but i have n't worked with it a lot . so what i did is i took a hand - segmented sample and i added ten times the amount of numbers at random , and it did pick out pretty good boundaries . phd d: but it 's possible with my segmentation by hand that we have information about the overlapping , grad a: right . so if we fed the hand - segmentation to javier 's and it does n't work , then we know something 's wrong . phd d: the demonstration by hand . segmentation by hand is the fast experiment .
professor c: there 's another i forget the word for it , it starts with a p . but typically when you 're starting around forty , for most people , it starts to harden and then it 's just harder for the lens to shift things , and the symptom is typically that you have to hold things further away to see them . professor c: my brother 's a gerontological psychologist and he came up with a body age test which gets down to only three measurements that are good enough statistical predictors of all the rest of it . and one of them is the distance that you have to hold it at . grad a: ok . so this time the form discussion should be very short , right ? professor c: and she 'll be most interested in that . she 's probably least involved in the signal - processing , so maybe we can just i do n't think we should go through an elaborate thing , but jose and i were just talking about the speech energy thing , professor c: and we did n't talk about the derivatives . but if you do n't mind my speaking for you for a bit : right now he 's not really showing any distinction , but we discussed a couple of the possible things that he can look at . one is that this is all in log energy , and log energy is compressing the distances between things . another is that he needs to play with the different temporal sizes . he was taking everything over two hundred milliseconds , and he 's going to vary that number and also look at moving windows , as we discussed before . and the other thing is that subtracting off the mean and dividing by the standard deviation in the log domain may not be the right thing to do . phd d: are these the long term means ? like , over the whole , the means of what ? professor c: and so he 's making the constraint that it has to be at least two hundred milliseconds . and so you take that . and then he 's measuring at the frame level still at the frame level , professor c: and then just normalizing with that larger amount . but one thing he was pointing out is when he looked at a bunch of examples in the log domain , it is actually pretty hard to see the change . and you can see why , just putting it on the board : if you have x plus x , the log of that is the log of x plus the log of two phd d: but you could do like a cdf there instead ? we do n't know that the distribution here is normal . professor c: but also a good first indicator is when the researcher looks at examples of the data and can not see a change in how big the signal is when the two speakers professor c: then that 's a problem right there . you should at least be able , doing casual looking , to get the sense , " hey , there 's something there . " and then you can play around with the measures . and when he 's looking in the log domain he 's not really seeing it . and when he 's looking in straight energy he is , so that 's a good place to start . so that was the discussion we just had . the other thing actually we had a question for adam in this . when you did the sampling in order to do the amplitude equalization , did you do it over the speech segments , or over just everything in the mike channels ? professor c: right , ok . so then that means that someone who did n't speak very much would be largely represented by silence .
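The log-compression point is easy to verify numerically: doubling linear energy doubles the feature, but in the log domain it only adds log 2, so the single-speaker and overlap distributions sit much closer together. The energy value below is made up.

```python
import numpy as np

e_single = 4.0e6                             # made-up single-speaker frame energy
e_overlap = 2 * e_single                     # idealized overlap: energies add
print(e_overlap / e_single)                  # 2.0 -- clear separation, linear domain
print(np.log(e_overlap) - np.log(e_single))  # 0.693... = log(2), a tiny shift
```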
and someone who would be so the normalization factor probably is grad a: this was quite quick and dirty , and it was just for listening . and for listening it seems to work really well . professor c: right . so there 's a good chance then , given that different people do talk different amounts , that there is still a lot more to be gained from gain normalization with some sort professor c: if we can figure out a way to do it . but we were saying that in addition to that there should be something related to pitch and harmonics . so we did n't talk about the other derivatives , but again , liz has a very good point , that it would be much more graphic just to show actually , you do have some distributions here for these cases . you have some histograms , and they do n't look very separated . grad a: except that it 's hard to judge this because they 're not normalized . it 's just number of frames . but even so . phd d: what i meant is , even if you use linear , raw measures , like raw energy or whatever , phd d: maybe we should n't make any assumptions about the distribution 's shape , and just use the distribution to model the mean , or , rather than the mean , take some professor c: and so in these he 's got that . he 's got some pictures . but he does n't just in derivatives , but not in the but he does n't phd d: right . so we do n't know what they look like for the raw . professor c: but he did n't have it for the energy . he had it for the derivatives . professor c: did you have this thing for just the unnormalized log energy ? ok . so she 's right .
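A sketch of the fix implied here: estimate each channel's gain from its speech frames only, so a mostly-silent channel is not normalized against its own silence. The percentile threshold is a crude stand-in for a real speech/nonspeech decision.

```python
import numpy as np

def channel_gain(samples, sr=16000, frame=160):
    """Return a gain that equalizes the channel's level over speech frames
    only (frames above an assumed 70th-percentile energy threshold)."""
    e = np.array([np.sum(samples[i:i + frame].astype(float) ** 2)
                  for i in range(0, len(samples) - frame, frame)])
    speech = e[e > np.percentile(e, 70)]  # crude: keep the loudest 30%
    return 1.0 / np.sqrt(np.mean(speech) + 1e-10)
```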
professor c: but since your intuition from looking at some of the data is that when you looked at the regular energy it did usually go up when two people were talking , you should be able to come up with a measure which will match your intuition . professor c: and she 's right that having this table with a whole bunch of things , with the standard deviation , the variance and so on , is harder to interpret than just looking at the same picture you have here . phd e: but it 's curious i found in the mixed file , in one channel , that several times you have a speaker talking alone with a high level of energy , phd e: in the middle a zone of overlapping with less energy , and then another speaker with high energy and the overlapping zone has less energy . professor c: so there 'll be i want to point to visual things there 'll be overlap between the distributions , but the question is , if it 's a reasonable feature , there 's some separation . grad a: what you would imagine eventually is that you 'll feed all of these features into some discriminative system . and so even if one of the features does a good job at one type of overlap , another feature might do a good job at another type of overlap . professor c: right . the reason i had suggested the scatter plots of features is i used to do this a lot when we had thirteen or fifteen or twenty features to look at . professor c: because with something that is a good feature by itself , you do n't really know how it 'll behave in combination , and so it 's good to have as many together at the same time as possible in some reasonable visual form . there 's graphic things people have had sometimes to put together three or four in some funny way . but it 's true that you should n't do any of that unless the individual ones , at least , have some hope phd d: especially for normalizing . it 's really important to pick a normalization that matches the distribution for that feature . and it may not be the same for all the types of overlaps , or the windows may not be the same . actually , i was wondering right now you 're taking all of the speech from the whole meeting and you 're trying to find points of overlap , but we do n't really know which speaker is overlapping with which speaker , right ? so another way would just be to take the speech from just , say , morgan , and just jane , and then just their overlaps by hand , by cheating and look at whether you can detect something that way , because if we ca n't do it that way , there 's no good way that we 're going to be able to do it . phd d: there might be something helpful and cleaner about looking at just individuals and then that combination alone . plus , the right model will be easier to see that way . so if you go through and you find adam , cuz he has a lot of overlaps , and some other speaker who also has enough speech , and just look at those three cases of adam and the other person and the overlaps and just look at the distributions , maybe there is a clear pattern but we just ca n't see it because there 's too many combinations of people that can overlap .
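The plot being asked for, sketched below: one histogram per class (silence, single speaker, overlap) of a single raw feature, normalized to densities so that classes with different frame counts are comparable, which addresses adam's point that raw frame counts are hard to judge. Class labels are assumed to come from the hand segmentation.

```python
import matplotlib.pyplot as plt

def plot_class_histograms(energy, labels):
    """energy: per-frame feature values; labels: per-frame class strings."""
    for cls in ("silence", "single", "overlap"):
        vals = [e for e, l in zip(energy, labels) if l == cls]
        plt.hist(vals, bins=50, alpha=0.5, density=True, label=cls)
    plt.xlabel("frame energy")
    plt.legend()
    plt.show()
```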
postdoc b: to start with , it 's your idea of simplifying , starting with something that you can see without the extra layers of phd d: cuz if energy does n't matter there like , i do n't think this is true , but what if postdoc b: but just a simple case , and the one that has a lot of data associated with it . phd d: what if it 's the case that when two people overlap there 's a conservation of energy and both people talk more softly ? i do n't think this happens . phd d: there are different types , and within those types like , as jose was saying , that sounded like a backchannel overlap , meaning the kind that 's a friendly encouragement , like " - " great ! and you do n't take the floor . but some of those , as you showed , can be discriminated by the duration of the overlap . actually the new student , don , who adam has met he was at one of our meetings is getting his feet wet and then he 'll be starting again in mid - january . he 's interested in trying to distinguish the types of overlap . i do n't know if he 's talked with you yet . but in honing in on these different types phd d: so it might be something that we can help by categorizing some of them and then look at that .
so maybe those do n't actually have much difference in energy . but all the other cases might . phd d: and the backchannels are easy to spot in terms of their words , or just listening to it . phd d: even if you take the log , your model just has more sensitive measures . grad a: but tone might be very your " - " tone is going to be very different . grad a: you could imagine doing specialized ones for different types of backchannels , if you had a good model for it . your " - " detector . professor c: my point is , if you 're doing essentially a linear separation , taking the log first does make it harder to separate . it 's a nonlinear operation that does change the distinction . if you 're doing some fancy thing , then but right now we 're essentially doing this linear thing by looking across here and saying we 're going to cut it here . and that 's the indicator that we 're getting . but anyway , we 're not disagreeing on any of this we should look at it more finely . but this often happens : you do fairly complicated things , and then you stand back from them and you realize that you have n't done something simple . so generate something like that just for the energy and see , and then , as liz says , when they have smaller , more coherent groups to look at , that would be another interesting thing later . and then that should give us some indication between those , some indication of whether there 's anything to be achieved from energy . and then you can move on to the more pitch - related things .
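The "cut it here" baseline can be made explicit: sweep a single threshold on one feature and keep the value that best separates overlap frames from the rest. This is only the one-dimensional linear separation being discussed, with labels assumed to come from the hand marks.

```python
import numpy as np

def best_threshold(feature, is_overlap):
    """Sweep every observed feature value as a cut point and return the
    (accuracy, threshold) pair that best separates the two classes."""
    feature = np.asarray(feature)
    is_overlap = np.asarray(is_overlap, dtype=bool)
    best = (0.0, None)
    for thr in np.unique(feature):
        acc = np.mean((feature > thr) == is_overlap)
        if acc > best[0]:
            best = (acc, thr)
    return best
```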
he 's a double - e . so it 's mostly that if we had to label it ourselves , we would , or we 'd have to , to get started , but it would be much better if you can do it . you 'd be much better at doing it also because i do n't have a good feel for how they should be sorted out , phd d: and i really did n't wanna go into that if i did n't have to . so if you 're willing to do that grad a: it would be interesting , though , to talk maybe not at the meeting , but at some other time about what the classes are . phd d: because you can read the literature , but i do n't know how it 'll turn out , and it 's always an interesting question . postdoc b: it seems like with reference to a purpose , too , we 'd want to have them coded . phd d: and we 'd still have some funding for this project , probably , if we had to hire someone like an undergrad , because don is being covered half time on something else we 're not paying him the full ra - ship for all the time . so if we got it to where we needed someone to do that i do n't think there 's really enough data where postdoc b: i see this as a prototype , to use only the already transcribed meeting as just a prototype . phd e: another thing , besides the class of overlap : the duration . because it 's possible some classes have a typical duration , a very short duration when we have overlapping speech . phd e: and it 's interesting to consider the normalization window . because if we have a type of overlap , a backchannel overlap , with a short duration , the effect of normalizing with only a small window on the left side and the right side of the overlap is different from normalizing with a window much bigger than that very short overlapping zone phd e: i understand . you have a backchannel , a very short overlapping zone , and if you consider all the channel to normalize this very short " mmm - " , and the channel is much bigger compared with the overlapping duration , the effect is stronger . the effect of the normalization with the mean and the variance is different than if you consider only a window comparable with the duration of the overlapping . phd d: it 's a sliding window , so if you take the measure in the center of the overlapped piece , there 'd better be something . but if your window is really huge then you 're right , you wo n't even phd e: this is the idea , to consider only the small window near the overlapping zone . phd d: the portion of the backchannel wo n't affect anything . but you should definitely not be more than , like , three times as big as your backchannel . then you 're gon na have a wash . and hopefully it 's more like on the order of professor c: the fact that this gain thing was crude if someone is speaking at a relatively consistent level , just to give an extreme example , all you 're doing is compensating for that . and then if you look at the frame with respect to that , it still should change phd d: it depends how different your normalization is as you slide your window across . that 's something we do n't know . grad a: one is your analysis window and then the other is any normalization that you 're doing .
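Liz's rule of thumb about the normalization window can be written down directly: cap the window at about three times the overlap's duration so a short backchannel is not washed out. The factor of three comes straight from the discussion; the default window length is an assumption.

```python
def normalization_window(overlap_frames, default=200, max_ratio=3):
    """Pick a per-overlap normalization window (in frames) no larger than
    max_ratio times the overlap itself, so short backchannels survive."""
    return min(default, max_ratio * overlap_frames)
```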
but it is definitely true that we need to have the time marks , and i was assuming that will be inherited , because if you have the words and they 're roughly aligned in time via forced alignment or whatever we end up using , then this student and i would be looking at the time marks postdoc b: good . i was n't planning to label the time marks . grad a: if it 's not hand - marked then we 're not going to get the times . phd d: it 's something that we would n't be able to do any work without a forced alignment anyway , phd d: so somehow once he gets going we 're gon na hafta come up with one professor c: again , for the close mike , we could take the switchboard system , grad a: it 'd be worth a try . it would be interesting to see what we get . phd d: cuz there 's a lot of work you ca n't do without that otherwise you 'd have to go in and measure every start and stop point next to a word phd d: if you 're interested in anything to do with words . anyway , so that 'd be great . professor c: there 's something we should talk about later , but maybe not just now we should talk about our options as far as the transcription . grad a: nope . one 's a digit form , one 's a speaker form . so one is a one - time - only speaker form and the other is the digits . grad a: this is just the suggestion for what the new forms would look like . so they incorporate the changes that we talked about . postdoc b: date and time . why did you switch the order of the date and time fields ? postdoc b: this is rather a low - level question , but it used to be date came first . grad a: because the user fills out the first three fields and i fill out the rest . postdoc b: how would the user know the time if they did n't know the date ? grad a: it 's an interesting observation , but it was intentional . because excuse me , the time is when you actually read the digits , but i ' m filling out the date beforehand . if you look at the form in front of you , the one you 're going to fill out when you read the digits , you 'll see i ' ve already filled in the date but not the time . postdoc b: i always assumed so the time is supposed to be pretty exact ? because i ' ve just been taking the beginning time of the meeting . grad a: the reason i put the time in is so that the person who 's extracting the digits , meaning me , will know where to look in the meeting to try to find the digits . phd d: so you should call it , like , " digits start time " . postdoc b: i was saying if we started the meeting at two thirty , i 'd put two thirty , and everyone was putting two thirty , postdoc b: and i did n't realize there was an " i ' m about to read this and i should " grad a: actually it 's about one third each . about one third of them are blank , about one third of them are when the digits are read , and about one third of them are when the meeting starts . postdoc b: either that or maybe you could write down when people start reading digits in that particular session . grad a: but if i ' m not at the meeting , i ca n't do that . grad a: but that is the reason name , email and time are where they are . postdoc b: that does raise another question , which is why is the " professional use only " line not higher ? why does n't it come in at the point of date and seat ? because we 're filling in other things .
postdoc b: , because if y your professional use , you 're gon na already have the date , and the s postdoc b: i ' m comparing the new one with the old one . this is the digit form . grad a: the digit form does n't have a " for official use only " line . it just has a line , which is what you 're supposed to read . grad a: so on the digits form , everything above the line is a fill - in form postdoc b: alright s but i did n't mean to derail our discussion here , so you really wanted to start with this other form . grad a: no , either way is fine you just started talking about something , and i did n't know which form you were referring to . postdoc b: alright , i was comparing so th this is so i was looking at the change first . so it 's like we started with this and now we ' ve got a new version of it wi with reference to this . so the digit form , we had one already . now the f the fields are slightly different . professor c: so the main thing that the person fills out is the name and email and time ? you do the rest ? postdoc b: what and there 's an addition of the native language , which is a bit redundant . this one has native language and this one does too . grad a: that 's because the one , the digit form that has native language is the old form not the new form . postdoc b: ! . , . there we go . , . i 'll catch up here . ok , i see . grad a: this was the problem with these categories , i picked those categories from timit . i what those are . phd d: actually , the only way i know is from working with the database and having to figure it out . phd e: is mean my native language spanish ? the original is the center of spain and the beca grad a: , you could call it whatever you want . for the foreign language we could n't classify every single one . so left it blank and you can put whatever you want . phd e: because is different , the span - the spanish language from the north of spain , of the south , of the west and the but . grad a: so i ' m not what to do about the region field for english variety . , when i wrote i was writing those down , i was thinking , " , these are great if you 're a linguist " . but i how to categorize them . grad a: so i my only question was if you were a south midland speaking region , person ? would it ? is that what you would call yourself ? professor c: , if you 're talking if you 're thinking in terms of places , as opposed to names different peop names people have given to different ways of talking , i would think north midwest , and south midwest would be more common than saying midland , right , i went to s phd d: . now the usage maybe we can give them a li like a little map ? with the regions and they just no , i ' m serious . phd d: there 's no figure . just a little , it does n't have all the detail , but you postdoc b: , i was thinking you could have ma multiple ones and then the amount of time postdoc b: so , roughly . so . you could say , " ten years on the east coast , five years on the west coast " or other . grad a: , we we do n't want to get that level of detail at this form . that 's alright if we want to follow up . but . phd d: i as i said , i do n't think there 's a huge benefit to this region thing . it it gets the problem is that for some things it 's really clear and usually listening to it you can tell right away if it 's a new york or boston accent , but new york and boston are two , i they have the nyc , but new england has a bunch of very different dialects and so does s so do other places . 
grad a: , so i picked these regions cuz we had talked about timit , and those are right from timit . phd d: and so these would be satisfying like a speech research community if we released the database , but as to whether subjects know where they 're from , i ' m not because i know that they had to fill this out for switchboard . this is i almost exactly the same as switchboard regions phd d: or very close . and i how they filled that out . but th if midland , midland is the one that 's difficult i . phd d: also northwest you ' ve got oreg - washington and oregon now which y people if it 's western or northern . grad a: , i certainly do n't . , i was saying i do n't even i speak . grad a: northwest ? , so this is a real problem . i what to do about it . postdoc b: i would n't know how to characterize mine either . and and so i would think i would say , i ' ve got a mix of california and ohio . grad a: i c at the first level , we speak the same . our dialects or whatever you region are the same . but i what it is . phd d: but maybe that maybe we could leave this and see what people see what people choose and then let them just fill in if they do n't i what else we can do , cuz that 's north midland . postdoc b: i ' m wondering about a question like , " where are you from mostly ? " professor c: but i ' m s i ' m now that you mentioned it though , i am really am confused by " northern " . professor c: , if you 're in new england , that 's north . if you 're i if you 're professor c: maybe we should put a little map and say " put an x on where you 're from " , phd d: we we went around this and then a lot of people ended up saying that it grad a: , i like the idea of asking " what variety of english do you speak " as opposed to where you 're from because th if we start asking where we 're from , again you have to start saying , " , is that the language you speak or is that just where you 're from ? " phd d: it gives us good information on where they 're from , but that does n't tell us anything grad a: , i could try to put squeeze in a little map . there 's not a lot of r of room professor c: i 'd say , " boston , new york city , the south and regular " . grad a: of those , northern is the only one that i do n't even they 're meaning . phd d: so let 's make it up . s , who cares . right ? we can make up our own so we can say " northwest " , " rest of west " . west phd d: that 's , w it 's in it 's it 's harder in america anywhere else , . grad a: some of them are very obvious . if you if you talk to someone speaking with southern drawl , . phd d: and those people , if you ask them to self - identify their accent they know . postdoc b: , we ca , why ca n't we just say characterize something like char characterize your accent postdoc b: but someone from boston with a really strong coloration would know . and so would an r - less maine , phd d: because if you , then , ruling out the fact that you 're inept , if somebody does n't know , it probably means their accent is n't very strong compared to the midwest standard . professor c: , it was n't that long ago that we had somebody here who was from texas who was that he did n't have any accent left . and and had he had a pretty noticeable drawl . postdoc b: . i would say more sweepingly , " how would you characterize your accent ? " grad a: i if i read this form , they 're going to ask it they 're going to answer the same way if you say , " what 's variety of english do you speak ? region . " as if you say " what variety of region do you speak ? 
characterize your accent ? " they 're going to answer the same way . postdoc b: , i was not that so . i was suggesting not having the options , just having them grad a: , i see . what we talked about with that is so that they would understand the granularity . postdoc b: yes , but if , as liz is suggesting , people who have strong accents know that they do grad a: that 's what i had before , and you told me to list the regions to list them . professor c: last week i was r arguing for having it wide open , but then everybody said " , no , but then it will be hard to interpret because some people will say cincinnati and some will say ohio " . phd d: and would people no , what if we put in both ways of asking them ? so . one is region and the other one is " if you had to characterize yourself your accent , what would you say ? " postdoc b: that 's fine . they might say " other " for region because they what category to use phd d: it just and we might learn from what they say , as to which one 's a better way to ask it . professor c: but it says " variety " and then it gives things that e have american as one of the choices . but then it says " region " , but region actually just applies to , us , postdoc b: at the last meeting , my recollection was that we felt people would have less that there are so many types and varieties of these other languages and we are not going to have that many subjects from these different language groups and that it 's a huge waste of space . grad a: it just said region colon . and and that 's the best way to do it , because of the problems we 're talking about but what we said last week , was no , put in a list , so i put in a list . so should we go back to grad a: , certainly dropping " northern " is right , because none of us that is . phd d: cuz , and keeping " other " , and then maybe this north midland , we call it " north midwest " . south midwest , or just professor c: but there 's a town . in there . i forget what it is @ . phd d: colora , right . and then , the dropping north , so it would be western . it 's just one big shebang , where , you have huge variation in dialects , grad a: , i should n't say that . i have no clue . i was going to say the only one that does n't have a huge variety is new york city . but i have no idea whether it does or not . postdoc b: it does seem . i would think that these categories would be more w would be easier for an analyst to put in rather than the subject himself . professor c: a minute . where does d w where 's where does new york west of new york city and pennsylvania and professor c: pennsylvania . pennsylvania is not new england . and new jersey is not new england and maryland is not new england and none of those are the south . grad a: ok . so . another suggestion . rather than have circle fill in forms , say " region , open paren , e g southern comma western comma close paren colon . " professor c: that 's good . i like that . we 're all sufficiently tired of this that we 're agreeing with you . grad a: ok , we 'll do it that way . actually , i like that a lot . because that gets at both of the things we were trying to do , the granularity , and the person can just self - assess and we do n't have to argue about what these regions are . postdoc b: that 's right . and it 's easy on the subjects . now i have one suggestion on the next section . so you have native language , you have region , and then you have time spent in english speaking country .
now , i wonder if it might be useful to have another open field saying " which one parenthesis s paren closed parenthesis " . cuz if they spent time in britain and america it does n't have to be ex all exact , just in the same open field format that you have . postdoc b: with a with an s which one sss , optional s . professor c: ok . postdoc b: yeah . professor c: we done ? grad a: yep . postdoc b: yeah , that 's good . professor c: ok . um s e any any other open mike topics or should we go right to the digits ? grad a: um , did you guys get my email on the multitrans ? that ok . postdoc b: is n't that wonderful ! yeah . grad a: yeah . so . i have a version also which actually displays all the channels . postdoc b: excellent ! thank you ! phd d: it 's really great . grad a: but it 's hideously slow . postdoc b: so you this is n dan 's patches , dan ellis 's patches . grad a: the what the ones i applied , that you can actually do are dan 's , because it does n't slow it down . postdoc b: fantastic ! grad a: just uses a lot of memory . phd d: so when you say " slow " , does that mean grad a: no , the one that 's installed is fine . it 's not slow . i wrote another version . which , instead of having the one pane with the one view , it has multiple panes with the views . but the problem with it is the drawing of those waveforms is so slow that every time you do anything it just crawls . it 's really bad . grad a: as you play , as you move , as you scroll . just about anything , and it was so slow it was not usable . so that 's why i did n't install it and did n't pursue it . postdoc b: and this 'll be a hav having the multiwave will be a big help cuz in terms of like disentangling overlaps and things , that 'll be a big help . grad a: so . that the one dan has is usable enough . it does n't display the others . it displays just the mixed signal . but you can listen to any of them . postdoc b: that 's excellent . he also has version control which is another e so you e the patches that you grad a: , not if we 're going to use tcl - tk at least not if we 're going to use snack . phd d: , i ' m i probably would be trying to use the whatever 's there . and it 's useful to have the grad a: why do n't we see how dan 's works and if it if we really need the display phd d: .
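one common way around the slow waveform drawing described here is min-max decimation: draw only the per-pixel minimum and maximum sample instead of every sample. this is a hypothetical fix, not something anyone in the meeting proposed, and whether it would rescue the tcl-tk/snack version is untested; a sketch:

```python
import numpy as np

def minmax_decimate(samples, n_pixels):
    """Reduce a waveform to per-pixel (min, max) pairs for fast drawing.

    Rendering 2 * n_pixels points instead of every sample is a standard
    trick for keeping multi-pane waveform displays responsive.
    """
    bins = np.array_split(np.asarray(samples), n_pixels)
    return [(float(b.min()), float(b.max())) for b in bins if len(b)]
```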
i wonder i ' m just wondering if we can display things other than the wave form . so . suppose we have a feature stream . and it 's just , a uni - dimensional feature , varying in time . and we want to plot that , instead of the whole wave form . that might be faster . grad a: we we could do that but that would mean changing the code . this is n't a program we wrote . this is a program that we got from someone else , and we ' ve done patches on . professor c: if there was some is there some way to have someone write patches in something faster and link it in , ? grad a: y yes we could do that . you could you can write widgets in c . and try to do it that way but do n't think it let 's try it with dan 's and if that is n't enough , we can do it otherwise . it is , cuz when i was playing with it , the mixed signal has it all in there . and so it 's really it 's not too bad to find places in the stream where things are happening . so i do n't think it 'll be bad . postdoc b: and it 's also the case that this multi - wave thing is proposed to the so . dan proposed it to the transcriber central people , and it 's likely that so . and and they responded favorably looks as though it will be incorporated in the future version . they said that the only reason they had n't had the multi the parallel stream one before was simply that they had n't had time to do it . and so it 's likely that this may be entered into the ch this central @ . phd d: so . you mean they could do it and it would be fast enough if they do it ? postdoc b: so . this one that we now have does have the status of potentially being incorporated l likely being incorporated into the central code . now , tha now , if we develop further then , y , i do n't grad a: if one of us sat down and coded it , so that it could be displayed fast enough i ' m they would be quite willing to incorporate it . postdoc b: - . like the idea of it being something that 's , tied back into the original , so that other people can benefit from it . however . i also understand that you can have widgets that are very useful for their purpose and that you do n't need to always go that w route . professor c: . let 's do digits , and then we 'll turn off the mikes , and then i have one other thing to discuss . phd d: i actually have to leave . so . i had to leave at three thirty , so , for the digits but i ca n't stay for the discussion phd d: but if there 's something on the rest of the i ' m i 'll be around just have to make call before quarter of . so i or we can talk about it .
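the feature-stream idea floated here is cheap because a feature has one value per frame rather than one per sample; a sketch of plotting such a stream, with log energy and a 10 ms hop as assumed stand-ins for whatever feature would actually be displayed:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_feature_stream(feature, hop_s=0.01, label="log energy"):
    """Plot a uni-dimensional feature stream against time.

    One point per frame is far less to draw than the raw waveform,
    which is why this could be faster than the full wave display.
    hop_s (frame hop in seconds) and the feature itself are assumptions.
    """
    t = np.arange(len(feature)) * hop_s
    plt.plot(t, feature)
    plt.xlabel("time (s)")
    plt.ylabel(label)
    plt.show()
```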
the berkeley meeting recorder group discussed efforts by speaker mn005 to measure energy levels in cases of speaker overlap in which the time window analyzed was 200 milliseconds or greater. preliminary results were presented showing that log domain analyses did not reveal a significant difference in mean energy levels for windows of overlapping versus non-overlapping speech. in contrast , raw energy analyses were successful in showing the two groups to be distinct. participants discussed alternate strategies for examining energy and the importance of categorizing types of speaker overlap. participants also reviewed the latest iteration of speaker forms , and discussed recent changes to the transcriber tool. continuing efforts by speaker mn005 to measure energy levels in cases of speaker overlap will not include additional log energy analyses , but rather an analysis of raw energy for normalized speaker data across windows of varying duration , followed by an examination of pitch- and harmonicity-related features. speakers fe008 and fe016 discussed plans to categorize and produce a taxonomy of types of speaker overlap. time marks for transcribed or force-aligned data are needed to analyze types of speaker overlap. with respect to speaker forms , the group discussed problems associated with categorizing regional dialects of american english. the new version of transcriber does not display per-channel waveforms , as re-drawing of this window is too slow using snack. the latest iteration of speaker forms was presented. new multitrans patches were added to the transcriber tool to enable it to run with fewer delays , allowing users to view the mixed signal while listening to any individual channel.
###dialogue: professor c: there 's another i . it starts with a p . i forget the word for it , but it 's typically when you 're ab r starting around forty for most people , it starts to harden and then it 's just harder for the lens to shift things and th the symptom is typically that you have to hold further away to see it . professor c: , m my brother 's a gerontological psychologist and he came up with an a body age test which gets down to only three measurements that are good enough st statistical predictors of all the rest of it . and one of them is the distance that you have to hold it at . grad a: ok . so . this time the form discussion should be very short , right ? professor c: and she 'll be most interested in that . , she 's probably least involved in the signal - processing so maybe we can just , i do n't think we should go though an elaborate thing , but jose and i were just talking about the { nonvocalsound } , speech e energy thing , professor c: and i we did n't talk about the derivatives . but , the i if you do n't mind my speaking for you for a bit , . right now , that he 's not really showing any distinction , but we discussed a couple of the possible things that he can look at . and one is that this is all in log energy and log energy is compressing the distances between things . another is that he needs to play with the different temporal sizes . he was he was taking everything over two hundred milliseconds , and he 's going to vary that number and also look at moving windows , as we discussed before . and and the other thing is that the doing the subtracting off the mean and the variance in the and dividing it by the standard deviation in the log domain , may not be the right thing to do . phd d: are these the long term means ? like , over the whole , the means of what ? professor c: and so i his he 's making the constraint it has to be at least two hundred milliseconds . and so you take that . and then he 's measuring at the frame level still at the frame level , of what professor c: and then just normalizing with that larger amount . and but one thing he was pointing out is when he looked at a bunch of examples in log domain , it is actually pretty hard to see the change . and you can see that , because of j of just putting it on the board that if you have log - x plus log - x , that 's the log of x plus the log of two phd d: but you could do like a c d f there instead ? , we that the distribution here is normally . professor c: , but also u a good first indicator is when the researcher looks at examples of the data and can not see a change in how big the signal is , when the two speaker professor c: then , that 's a problem right there . so . you should at least be able , doing casual looking and can get the sense , " hey , there 's something there . " and then you can play around with the measures . and when he 's looking in the log domain he 's not really seeing it . so . and when he 's looking in straight energy he is , so that 's a good place to start . so that was the discussion we just had . the other thing actually we ca had a question for adam in this . , when you did the sampling ? over the speech segments or s or sampling over the individual channels in order to do the e the amplitude equalization , did you do it over just the entire everything in the mike channels ? professor c: right , ok . so then that means that someone who did n't speak very much would be largely represented by silence . 
and someone who would be so the normalization factor probably is i is grad a: , this was quite quick and dirty , and it was just for listening . and for listening it seems to work really . so . professor c: right . so th so there there 's a good chance then given that different people do talk different amounts that there is still a lot more to be gained from gain norm normalization with some sort professor c: if we can figure out a way to do it . but we were that in addition to that there should be s related to pitch and harmonics and . so we did n't talk about the other derivatives , but again just looking at , liz has a very good point , that it would be much more graphic just to show , actually , you do have some distributions here , for these cases . you have some histograms , and , they do n't look very separate . separated . grad a: except that it 's hard to judge this because the they 're not normalized . it 's just number of frames . but , even so . phd d: w , what i meant is , even if you use linear , raw measures , like raw energy or whatever , phd d: maybe we should n't make any assumptions about the distribution 's shape , and just use , use the distribution to model the mean , or what y , rather than the mean take some professor c: but and so in these he 's got that . he 's got some pictures . but he does n't in the he i just in derivatives , but not in the but he d but he does n't phd d: right . so , we what they look like on the , tsk for the raw . professor c: but he did n't h have it for the energy . he had it for the derivatives . professor c: did you have this thing , for just the l r the unnormalized log energy ? ok . so she 's right . professor c: , even before you get the scatter plots , just looking at a single feature , looking at the distribution , is a good thing to do . phd e: catal - combining the different possibilities of the parameters . i the scatter plot combining different n two combination . professor c: , let 's start with the before we get complicated , let 's start with the most basic wh thing , which is we 're arguing that if you take energy if you look at the energy , that , when two people are speaking at the same time , usually there 'll be more energy than when one is right ? that 's that hypothesis . professor c: and the first way you 'd look at that , s she 's , right , is that you would just take a look at the distribution of those two things , much as you ' ve plotted them here , but just do it in this case you have three . you have the silence , and that 's fine . so , with three colors or three shades or whatever , just look at those distributions . and then , given that as a base , you can see if that gets improved , or worsened by the looking at regular energy , looking at log energy , we were just proposing that maybe it 's , it 's harder to see with the log energy , and also these different normalizations , does a particular choice of normalization make it better ? but i had maybe made it too complicated by suggesting early on , that you look at scatter plots because that 's looking at a distribution in two dimensions . let 's start off just in one , with this feature . that 's probably the most basic thing , before anything very complicated . and then we w we 're that pitch - related things are going to be a really likely candidate to help . 
professor c: but since your intuition from looking at some of the data , is that when you looked at the regular energy , that it did usually go up , when two people were talking , that 's , you should be able to come up with a measure which will match your intuition . professor c: and she 's right , that a that having a having this table , with a whole bunch of things , with the standard deviation , the variance and , it 's harder to interpret than just looking at the same picture you have here . phd e: but it it 's curious but i f i found it in the mixed file , in one channel that in several e several times you have an speaker talking alone with a high level of energy phd e: in the middle a zone of overlapping with mmm less energy and come with another speaker with high energy and the overlapping zone has less energy . professor c: but , the qu so they 'll be this is i w want to point to visual things , but they there 'll be time there 'll be overlap between the distributions , but the question is , " if it 's a reasonable feature , there 's some separation . " grad a: what you would imagine eventually , is that you 'll feed all of these features into some discriminative system . and so even if one of the features does a good job at one type of overlap , another feature might do a good job at another type of overlap . professor c: right . the reason i had suggested the scatter f p features is i used to do this a lot , when we had thirteen or fifteen or twenty features to look at . professor c: because something is a good feature by itself , you do n't really know how it 'll behave in combination and so it 's to have as many together at the same time as possible in some reasonable visual form . there 's graphic things people have had sometimes to put together three or four in some funny way . but it 's true that you should n't do any of that unless that the individual ones , at least , have some hope phd d: , especially for normalizing . , it 's really important to pick a normalization that matches the distribution for that feature . and it may not be the same for all the types of overlaps or the windows may not be the same . e actually , i was wondering , right now you 're taking a all of the speech , from the whole meeting , and you 're trying to find points of overlap , but we do n't really know which speaker is overlapping with which speaker , right ? so another way would just be to take the speech from just , say , morgan , and just jane and then just their overlaps , like but by hand , by cheating , and looking at , if you can detect something that way , because if we ca n't do it that way , there 's no good way that we 're going to be able to do it . phd d: that , there might be something helpful and cleaner about looking at just individuals and then that combination alone . plus , it has more elegant e the m the right model will be easier to see that way . so if i , if you go through and you find adam , cuz he has a lot of overlaps and some other speaker who also has e enough speech and just look at those three cases of adam and the other person and the overlaps , maybe and just look at the distributions , maybe there is a clear pattern but we just ca n't see it because there 's too many combinations of people that can overlap . 
postdoc b: it 's to start with it 's your idea of simplifying , starting with something that you can see without the extra layers of phd d: cuz if energy does n't matter there , like i do n't think this is true , but what if postdoc b: but just simple case and the one that has the lot of data associated with it . phd d: what if it 's the case that when two people overlap they equate their , there 's a conservation of energy and everybody both people talk more softly ? i do n't think this happens . phd d: there are there are different types , and within those types , like as jose was saying , that sounded like a backchannel overlap , meaning the kind that 's a friendly encouragement , like " - . " , great ! , ! and it does n't take you do n't take the floor . , but , some of those , as you showed , can be discriminated by the duration of the overlap . it actually the s new student , don , who adam has met , and he was at one of our meetings he 's getting his feet wet and then he 'll be starting again in mid - january . he 's interested in trying to distinguish the types of overlap . i if he 's talked with you yet . but in honing in on these different types phd d: so it might be something that we can help by categorizing some of them and then , look at that . professor c: because it would be the quickest thing for him to do . he could you see , he already has all his in place , he has the histogram mechanism , he has the that subtracts out and all he has to do is change it from log to plain energy and plot the histogram and look at it . and then he should go on and do the other bec but but this will phd d: , no . i did n't mean that for you to do that , but i was thinking if don and i are trying to get categories and we label some data for you , and we say this is what we think is going so you do n't have to worry about it . and here 's the three types of overlaps . and we 'll do the labelling for you . phd d: then maybe you can try some different things for those three cases , and see if that helps , or phd e: this is the thing i comment with you before , that we have a great variation of th situation of overlapping . and the behavior for energy is , log energy , is not the same all the time . professor c: but i was just saying that right now from the means that you gave , i do n't have any sense of whether even , there are any significant number of cases for which there is distinct and i would imagine there should be some , there should be the distributions should be somewhat separated . and i would still that if they are not separated , that there 's some there 's most likely something wrong in the way that we 're measuring it . , but , i would n't expect that it was very common overall , that when two people were talking at the same time , that it would that it really was lower , although sometimes , as you say , it would . so . phd d: , no , that was that was a jok or a , a case where you would never know that unless you actually go and look at two individuals . grad a: right . mind if i turned that light off ? the flickering is annoying me . phd d: it might the case , though , that the significant energy , just as jose was saying , comes in the non - backchannel cases . because in back most people when they 're talking do n't change their own energy when they get a backchannel , cuz they 're not really predicting the backchannel . and sometimes it 's a nod and sometimes it 's an " - " . and the " - " is really usually very low energy . 
so maybe those do n't actually have much difference in energy . but all the other cases might . phd d: and the backchannels are easy to spot s in terms of their words or , just listen to it . phd d: , even if you take the log , you can your model just has a more sensitive measures . grad a: , but tone might be very , you 're " - " tone is going to be very different . grad a: you could imagine doing specialized ones for different types of backchannels , if you could if you had a good model for it . your " - " detector . professor c: if if you 're a i my point is , if you 're doing essentially a linear separation , taking the log first does make it harder to separate . so it 's so , if you i so i if there close to things it does it 's a nonlinear operation that does change the distinction . if you 're doing a non if you 're doing some fancy thing then and right now we 're essentially doing this linear thing by looking across here and saying we 're going to cut it here . and that 's the indicator that we 're getting . but anyway , we 're not disagreeing on any of this , we should look at it more finely , but that this often happens , you do fairly complicated things , and then you stand back from them and you realize that you have n't done something simple . so , if you generated something like that just for the energy and see , and then , a as liz says , when they g have smaller , more coherent groups to look at , that would be another interesting thing later . and then that should give us some indication between those , should give us some indication of whether there 's anything to be achieved f from energy . and then you can move on to the more { nonvocalsound } pitch related . professor c: but then the have you started looking at the pitch related , or ? pitch related ? phd e: i ' m preparing the program but i do n't begin because i saw your email phd e: and i agree with you it 's better to i suppose it 's better to consider the energy this parameter bef professor c: , that 's not what i meant . no , no . i , we certainly should see this but i that the harm i certainly was n't saying this was better than the harmonicity and pitch related things i was just saying phd e: i understood that i had to finish by the moment with the and concentrate my energy in that problem . professor c: ok . but , like , all these derivatives and second derivatives and all these other very fancy things , i would just look at the energy and then get into the harmonicity as a suggestion . professor c: so maybe since w we 're trying to compress the meeting , i know adam had some form he wanted to talk about and did you have some ? postdoc b: i wanted to ask just s something on the end of this top topic . so , when i presented my results about the distribution of overlaps and the speakers and the profiles of the speakers , at the bottom of that i did have a proposal , and i had plan to go through with it , of co coding the types of overlaps that people were involved in s just with reference to speaker style so , with reference and i said that on my in my summary , postdoc b: that so it 's like people may have different amounts of being overlapped with or overlapping postdoc b: but that in itself is not informative without knowing what types of overlaps they 're involved in so i was planning to do a taxonomy of types overlaps with reference to that . postdoc b: so , but it 's like it sounds like you also have something in that direction . phd d: we have nothing , we got his environment set up . 
he 's he 's a double - e . so . it 's mostly that , if we had to label it ourselves , we would or we 'd have to , to get started , but if it it would be much better if you can do it . you 'd be much better at doing it also because , i ' m not i do n't have a good feel for how they should be sorted out , phd d: and i really did n't wanna go into that if i did n't have to . so if if you 're w willing to do that or grad a: it would be interesting , though , to talk , maybe not at the meeting , but at some other time about what are the classes . phd d: because you can read the literature , but i how it 'll turn out and , it 's always an interesting question . postdoc b: it seems like we also s with reference to a purpose , too , that we 'd want to have them coded . phd d: and we 'd still have some funding for this project , like probably , if we had to hire some like an undergrad , because don is being covered half time on something else , he we 're not paying him the full ra - ship for all the time . so . if we got it to where we wanted we needed someone to do that i do n't think there 's really enough data where postdoc b: , i see this as a prototype , to use the only the already transcribed meeting as just a prototype . phd e: another e m besides the class of overlap , the duration . because is possible some s some classes has a type of a duration , a duration very short when we have overlapping with speech . phd e: is possible to have . and it 's interesting , to consider the window of normalization , normalization window . because if we have a type of , a overlap , backchannel overlap , with a short duration , is possible to normali i that if we normalize with consider only the window by the left ri side on the right side overlapping with a very a small window the if the fit of normalization is mmm bigger in that overlapping zone very short phd e: i me i understand . that you have a backchannel , you have a overlapping zone very short and you consider n all the channel to normalize this very short " mmm - " and the energy is not height if you consider all the channel to normalize and the channel is mmm bigger compared with the overlapping duration , the effect is mmm stronger that the e effect of the normalization with the mean and the variance is different that if you consider only a window compared with the n the duration of overlapping . phd d: it 's a sliding window , so if you take the measure in the center of the overlapped piece , there 'd better be some something . but if your window is really huge then you 're right you wo n't even phd e: , this is the idea , to consider only the small window near the overlapping zone . phd d: the portion of the backchannel wo n't effect anything . but you , you should n't be more than like you should definitely not be three times as big as your backchannel . then you 're gon na w have a wash . and hopefully it 's more like on the order of professor c: , the fact that this gain thing was crude , and the gain wh if someone is speaking relatively at consistent level , just to give a an extreme example , all you 're doing is compensating for that . and then you still s and then if you look at the frame with respect to that , it still should change phd d: , it depends how different your normalization is , as you slide your window across . that 's something we . grad a: , one is your analysis window and then the other is any normalization that you 're doing . phd d: and . 
but it is definitely true that we need to have the time marks , and i was assuming that will be inherited because , if you have the words and they 're roughly aligned in time via forced alignment or whatever we end up using , then , this student and i would be looking at the time marks postdoc b: good . so , it would n't be i was n't planning to label the time marks . grad a: if it 's not hand - marked then we 're not going to get the times . phd d: , it 's something that w , we would n't be able to do any work without a forced alignment anyway , phd d: so somehow if once he gets going we 're gon na hafta come up with one professor c: again for the close mike , we could come up take a s take the switchboard system , grad a: it 'd be worth a try . it would be interesting to see what we get . phd d: cuz there 's a lot of work you ca n't do without that , how would you you 'd have to go in and measure every start and stop point next to a word phd d: is y if you 're interested in anything to do with words . anyway so that 'd be great . professor c: there 's something we should talk about later but maybe not just now . but , should talk about our options as far as the transcription grad a: nope . one 's a digit form , one 's a speaker form . so one is a one time only speaker form and the other is the digits . grad a: this is just the suggestion for what the new forms would look like . so , they incorporate the changes that we talked about . postdoc b: date and time . why did you switch the order of the date and time fields ? this is rather a low - level , but postdoc b: this is this is rather a low level question , but it used to be date came first . grad a: , because the user fills out the first three fields and i fill out the rest . postdoc b: , how would the how would the user know the time if they did n't know the date ? grad a: it 's an interesting observation , but it was intentional . because the date is when you actually read the digits and the time and , excuse me , the time is when you actually read the digits , but i ' m filling out the date beforehand . if you look at the form in front of you ? that you 're going to fill out when you read the digits ? you 'll see i ' ve already filled in the date but not the time . postdoc b: i always assumed so the time is supposed to be pretty exact , because i ' ve just been taking beginning time of the meeting . grad a: the the reason i put the time in , is so that the person who 's extracting the digits , meaning me , will know where to look in the meeting , to try to find the digits . phd d: so you should call it , like , " digits start time " . or . postdoc b: , i was saying if we started the meeting at two thirty , i 'd put two thirty , and i d e everyone was putting two thirty , postdoc b: and i did n't realize there was " i ' m about to read this and i should " grad a: actually it 's about one third each . about one third of them are blank , about one third of them are when the digits are read , and about one third of them are when the meeting starts . postdoc b: ei - either that or maybe you could maybe write down when people start reading digits on that particular session . grad a: but if i ' m not at the meeting , i ca n't do that . grad a: , but that is the reason name , email and time are where they are . postdoc b: actually you could that does raise another question , which is why is the " professional use only " line not higher ? why does n't it come in at the point of date and seat ? . because we 're filling in other things . 
postdoc b: , because if y your professional use , you 're gon na already have the date , and the s postdoc b: i ' m comparing the new one with the old one . this is the digit form . grad a: the digit form does n't have a " for official use only " line . it just has a line , which is what you 're supposed to read . grad a: so on the digits form , everything above the line is a fill - in form postdoc b: alright s but i did n't mean to derail our discussion here , so you really wanted to start with this other form . grad a: no , either way is fine you just started talking about something , and i did n't know which form you were referring to . postdoc b: alright , i was comparing so th this is so i was looking at the change first . so it 's like we started with this and now we ' ve got a new version of it wi with reference to this . so the digit form , we had one already . now the f the fields are slightly different . professor c: so the main thing that the person fills out is the name and email and time ? you do the rest ? postdoc b: what and there 's an addition of the native language , which is a bit redundant . this one has native language and this one does too . grad a: that 's because the one , the digit form that has native language is the old form not the new form . postdoc b: ! . , . there we go . , . i 'll catch up here . ok , i see . grad a: this was the problem with these categories , i picked those categories from timit . i what those are . phd d: actually , the only way i know is from working with the database and having to figure it out . phd e: is mean my native language spanish ? the original is the center of spain and the beca grad a: , you could call it whatever you want . for the foreign language we could n't classify every single one . so left it blank and you can put whatever you want . phd e: because is different , the span - the spanish language from the north of spain , of the south , of the west and the but . grad a: so i ' m not what to do about the region field for english variety . , when i wrote i was writing those down , i was thinking , " , these are great if you 're a linguist " . but i how to categorize them . grad a: so i my only question was if you were a south midland speaking region , person ? would it ? is that what you would call yourself ? professor c: , if you 're talking if you 're thinking in terms of places , as opposed to names different peop names people have given to different ways of talking , i would think north midwest , and south midwest would be more common than saying midland , right , i went to s phd d: . now the usage maybe we can give them a li like a little map ? with the regions and they just no , i ' m serious . phd d: there 's no figure . just a little , it does n't have all the detail , but you postdoc b: , i was thinking you could have ma multiple ones and then the amount of time postdoc b: so , roughly . so . you could say , " ten years on the east coast , five years on the west coast " or other . grad a: , we we do n't want to get that level of detail at this form . that 's alright if we want to follow up . but . phd d: i as i said , i do n't think there 's a huge benefit to this region thing . it it gets the problem is that for some things it 's really clear and usually listening to it you can tell right away if it 's a new york or boston accent , but new york and boston are two , i they have the nyc , but new england has a bunch of very different dialects and so does s so do other places . 
grad a: , so i picked these regions cuz we had talked about timit , and those are right from timit . phd d: and so these would be satisfying like a speech research community if we released the database , but as to whether subjects know where they 're from , i ' m not because i know that they had to fill this out for switchboard . this is i almost exactly the same as switchboard regions phd d: or very close . and i how they filled that out . but th if midland , midland is the one that 's difficult i . phd d: also northwest you ' ve got oreg - washington and oregon now which y people if it 's western or northern . grad a: , i certainly do n't . , i was saying i do n't even i speak . grad a: northwest ? , so this is a real problem . i what to do about it . postdoc b: i would n't know how to characterize mine either . and and so i would think i would say , i ' ve got a mix of california and ohio . grad a: i c at the first level , we speak the same . our dialects or whatever you region are the same . but i what it is . phd d: but maybe that maybe we could leave this and see what people see what people choose and then let them just fill in if they do n't i what else we can do , cuz that 's north midland . postdoc b: i ' m wondering about a question like , " where are you from mostly ? " professor c: but i ' m s i ' m now that you mentioned it though , i am really am confused by " northern " . professor c: , if you 're in new england , that 's north . if you 're i if you 're professor c: maybe we should put a little map and say " put an x on where you 're from " , phd d: we we went around this and then a lot of people ended up saying that it grad a: , i like the idea of asking " what variety of english do you speak " as opposed to where you 're from because th if we start asking where we 're from , again you have to start saying , " , is that the language you speak or is that just where you 're from ? " phd d: it gives us good information on where they 're from , but that does n't tell us anything grad a: , i could try to put squeeze in a little map . there 's not a lot of r of room professor c: i 'd say , " boston , new york city , the south and regular " . grad a: of those , northern is the only one that i do n't even they 're meaning . phd d: so let 's make it up . s , who cares . right ? we can make up our own so we can say " northwest " , " rest of west " . west phd d: that 's , w it 's in it 's it 's harder in america anywhere else , . grad a: some of them are very obvious . if you if you talk to someone speaking with southern drawl , . phd d: and those people , if you ask them to self - identify their accent they know . postdoc b: , we ca , why ca n't we just say characterize something like char characterize your accent postdoc b: but someone from boston with a really strong coloration would know . and so would an r - less maine , phd d: because if you , then , ruling out the fact that you 're inept , if somebody does n't know , it probably means their accent is n't very strong compared to the midwest standard . professor c: , it was n't that long ago that we had somebody here who was from texas who was that he did n't have any accent left . and and had he had a pretty noticeable drawl . postdoc b: . i would say more sweepingly , " how would you characterize your accent ? " grad a: i if i read this form , they 're going to ask it they 're going to answer the same way if you say , " what 's variety of english do you speak ? region . " as if you say " what variety of region do you speak ? 
characterize your accent ? " they 're going to answer the same way . postdoc b: , i was not that so . i was suggesting not having the options , just having them grad a: , i see . what we talked about with that is so that they would understand the granularity . postdoc b: yes , but if , as liz is suggesting , people who have strong accents know that they do grad a: that 's what i had before , and you told me to list the regions to list them . professor c: last week i was r arguing for having it wide open , but then everybody said " , no , but then it will be hard to interpret because some people will say cincinnati and some will say ohio " . phd d: and would people no , what if we put in both ways of asking them ? so . one is region and the another one is " if you had to characterize yourself your accent , what would you say ? " postdoc b: that 's fine . they might say " other " for region because they what category to use phd d: it just and we might learn from what they say , as to which one 's a better way to ask it . professor c: but it says " variety " and then it gives things that e have american as one of the choices . but then it says " region " , but region actually just applies to , us , postdoc b: at the last meeting , my recollection was that we felt people would have less that there are so many types and varieties of these other languages and we are not going to have that many subjects from these different language groups and that it 's a huge waste of space . grad a: it just said region colon . and and that 's the best way to do it , because of the problems we 're talking about but what we said last week , was no , put in a list , so i put in a list . so should we go back to grad a: , certainly dropping " northern " is right , because none of us that is . phd d: cuz , and keeping " other " , and then maybe this north midland , we call it " north midwest " . south midwest , or just professor c: but there 's a town . in there . i forget what it is @ . phd d: colora , right . and then , the dropping north , so it would be western . it 's just one big shebang , where , you have huge variation in dialects , grad a: , i should n't say that . i have no clue . i was going to say the only one that does n't have a huge variety is new york city . but i have no idea whether it does or not . postdoc b: it does seem . i would think that these categories would be more w would be easier for an analyst to put in rather than the subject himself . professor c: a minute . where does d w where 's where does new york west of new york city and pennsylvania and professor c: pennsylvania . pennsylvania is not new england . and new jersey is not new england and maryland is not new england and none of those are the south . grad a: ok . so . another suggestion . rather than have circle fill in forms , say " region , open paren , e g southern comma western comma close paren colon . " professor c: that 's good . i like that . we 're all sufficiently tired of this that we 're agreeing with you . grad a: ok , we 'll do it that way . actually , i like that a lot . because that get 's at both of the things we were trying to do , the granularity , and the person can just self - assess and we do n't have to argue about what these regions are . postdoc b: that 's right . and it 's easy on the subjects . now i have one suggestion on the next section . so you have native language , you have region , and then you have time spent in english speaking country . 
now , i wonder if it might be useful to have another open field saying " which one parenthesis s paren closed parenthesis " . cuz if they spent time in britain and america it does n't have to be ex all exact , just in the same open field format that you have . postdoc b: with a with an s which one bmr009bdialogueact1468 249156 249299 b postdoc s -1 0 sss , optional s . bmr009cdialogueact1469 249564 24959 c professor s^bk^rt^tc -1 0 ok . bmr009edialogueact1470 249627 249641 e phd b -1 0 bmr009bdialogueact1471 249714 249723 b postdoc b% -1 0 yeah . bmr009cdialogueact1472 249833 250038 c professor qy^rt^t^tc -1 0 we done ? bmr009adialogueact1473 250075 250102 a grad s^aa -1 0 yep . bmr009bdialogueact1474 250121 250177 b postdoc s^aa -1 0 yeah , that 's good . bmr009cdialogueact1475 250193 250223 c professor s^bk -1 0 ok . bmr009cdialogueact1476 250286 250804 c professor fg|qr^rt -1 0 um s e any any other open mike topics or should we go right to the digits ? bmr009adialogueact1477 25086 251153 a grad fg|qy^rt +1 0 um , did you guys get my email on the multitrans ? bmr009adialogueact1478 251183 251192 a grad % -- -1 0 that bmr009adialogueact1479 251227 251248 a grad s^bk -1 0 ok . bmr009bdialogueact1480 251262 25133 b postdoc qh^ba -1 0 is n't that wonderful ! bmr009bdialogueact1481 25133 251366 b postdoc s^aa -1 0 yeah . bmr009adialogueact1483 251397 251817 a grad fg|s +1 yeah . so . i have a version also which actually displays all the channels . bmr009bdialogueact1482 251395 251442 b postdoc s^ba^fe -1 0 excellent ! bmr009bdialogueact1484 251446 251491 b postdoc s^ft -1 0 thank you ! bmr009ddialogueact1485 251501 25158 d phd s^ba -1 0 it 's really great . bmr009adialogueact1486 251879 251986 a grad s -1 0 but it 's hideously slow . bmr009bdialogueact1487 252011 252248 b postdoc s^bu -1 0 so you this is n dan 's patches , dan ellis 's patches . bmr009adialogueact1488 252186 252573 a grad s +1 the what the ones i applied , that you can actually do are dan 's , because it does n't slow it down . bmr009ddialogueact1489 252272 252296 d phd % - -1 0 m bmr009bdialogueact1490 252592 252657 b postdoc s^ba^fe -1 0 fantastic ! bmr009adialogueact1491 252647 252754 a grad s -1 0 just uses a lot of memory . bmr009ddialogueact1492 252763 252969 d phd qy%- -1 0 so when you say slow " , does that mean to grad a: no , the one that 's installed is fine . it 's not slow . i wrote another version . which , instead of having the one pane with the one view , it has multiple panes with the views . but the problem with it is the drawing of those waveforms is so slow that every time you do anything it just crawls . it 's really bad . grad a: as you play , as you move , as you scroll . just about anything , and it was so slow it was not usable . so that 's why i did n't install it and did n't pursue it . postdoc b: and this 'll be a hav having the multiwave will be a big help cuz in terms of like disentangling overlaps and things , that 'll be a big help . grad a: so . that the one dan has is usable enough . it does n't display the others . it displays just the mixed signal . but you can listen to any of them . postdoc b: that 's excellent . he also has version control which is another e so you e the patches that you grad a: , not if we 're going to use tcl - tk at least not if we 're going to use snack . phd d: , i ' m i probably would be trying to use the whatever 's there . and it 's useful to have the grad a: why do n't we see how dan 's works and if it if we really need the display phd d: . 
i wonder i ' m just wondering if we can display things other than the wave form . so . suppose we have a feature stream . and it 's just , a uni - dimensional feature , varying in time . and we want to plot that , instead of the whole wave form . that might be faster . grad a: we we could do that but that would mean changing the code . this is n't a program we wrote . this is a program that we got from someone else , and we ' ve done patches on . professor c: if there was some is there some way to have someone write patches in something faster and link it in , ? grad a: y yes we could do that . you could you can write widgets in c . and try to do it that way but do n't think it let 's try it with dan 's and if that is n't enough , we can do it otherwise . it is , cuz when i was playing with it , the mixed signal has it all in there . and so it 's really it 's not too bad to find places in the stream where things are happening . so i do n't think it 'll be bad . postdoc b: and it 's also the case that this multi - wave thing is proposed to the so . dan proposed it to the transcriber central people , and it 's likely that so . and and they responded favorably looks as though it will be incorporated in the future version . they said that the only reason they had n't had the multi the parallel stream one before was simply that they had n't had time to do it . and so it 's likely that this may be entered into the ch this central @ . phd d: so . you mean they could do it and it would be fast enough if they do it ? postdoc b: so . this one that we now have does have the status of potentially being incorporated l likely being incorporated into the central code . now , tha now , if we develop further then , y , i do n't grad a: if one of us sat down and coded it , so that it could be displayed fast enough i ' m they would be quite willing to incorporate it . postdoc b: - . like the idea of it being something that 's , tied back into the original , so that other people can benefit from it . however . i also understand that you can have widgets that are very useful for their purpose and that you do n't need to always go that w route . professor c: . let 's do digits , and then we 'll turn off the mikes , and then i have one other thing to discuss . phd d: i actually have to leave . so . i had to leave at three thirty , so , for the digits but i ca n't stay for the discussion phd d: but if there 's something on the rest of the i ' m i 'll be around just have to make call before quarter of . so i or we can talk about it . ###summary: the berkely meeting recorder group discussed efforts by speaker mn005 to measure energy levels in cases of speaker overlap in which the time window analyzed was 200 milliseconds or greater. preliminary results were presented showing that log domain analyses did not reveal a significant difference in mean energy levels for windows of overlapping versus non-overlapping speech. in contrast , raw energy analyses were successful in showing the two groups to be distinct. participants discussed alternate strategies for examining energy and the importance of categorizing types of speaker overlap. participants also reviewed the latest iteration of speaker forms , and discussed recent changes to the transcriber tool. 
continuing efforts by speaker mn005 to measure energy levels in cases of speaker overlap will not include additional log energy analyses , but rather an analysis of raw energy for normalized speaker data across windows of varying duration , followed by an examination of pitch- and harmonicity-related features. speakers fe008 and fe016 discussed plans to categorize and produce a taxonomy of types of speaker overlap. time marks for transcribed or force-aligned data are needed to analyze types of speaker overlap. with respect to speaker forms , the group discussed problems associated with categorizing regional dialects of american english. the new version of transcriber does not display the individual channel waveforms , as re-drawing of these windows is too slow using snack. the latest iteration of speaker forms was presented. new multitrans patches were added to the transcriber tool to enable it to run with fewer delays , still allowing users to open multiple panes and listen to different channels of the mixed signal.
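as a point of reference for the energy discussion above ( raw versus log-domain means over fixed windows ) and for the suggestion in the dialogue of plotting a cheap one-dimensional feature stream instead of the waveform , here is a minimal sketch. it assumes 16-bit mono pcm input ; the file path, the 200 ms default, and the function name are placeholders, not the group's actual analysis code.

```python
import wave

import numpy as np


def window_energies(path, win_ms=200):
    """Mean raw and log energy per non-overlapping window (illustrative only).

    Assumes a 16-bit mono PCM wave file; win_ms matches the 200 ms
    windows mentioned in the summary above.
    """
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples = pcm.astype(np.float64)
    win = int(rate * win_ms / 1000)
    n = len(samples) // win
    chunks = samples[: n * win].reshape(n, win)
    raw = (chunks ** 2).mean(axis=1)        # raw-domain energy per window
    log = 10.0 * np.log10(raw + 1e-10)      # log-domain (dB-like) energy
    return raw, log
```

plotting `raw` ( one value per 200 ms window ) is far cheaper to redraw than the full sample-level waveform , which is the speed problem the dialogue attributes to the multi-pane display.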
grad b: so i this is more or less now just to get you up to date , johno . this is what , grad d: . i . there were , like , the , @ and all that . but . you said you were adding grad b: this is ha ! very . , so we thought that , we can write up , an element , and for each of the situation nodes that we observed in the bayes - net ? so . what 's the situation like at the entity that is mentioned ? if we know anything about it ? is it under construction ? or is it on fire happening to it ? or is it stable ? and , going all the way , f through parking , location , hotel , car , restroom , @ riots , fairs , strikes , or disasters . grad c: so is this is a situation are is all the things which can be happening right now ? or , what is the situation type ? grad b: and , also , this is a what the input is going to be . right ? so , we will , this is a schema . this is grad c: , . if this is th l what the does this is what java bayes takes ? as a bayes - net spec ? grad b: no , because if we 're gon na interface to we 're gon na get an xml document from somewhere . and that xml document will say " we are able to we were able to observe that w the element , @ of the location that the car is near . " so that 's gon na be . grad c: so this is the situational context , everything in it . is that what situation is short for , shi situational context ? grad b: so this is just , again , a an xml schemata which defines a set of possible , permissible xml structures , which we view as input into the bayes - net . grad c: and then we can r possibly run one of them transformations ? that put it into the format that the bayes n or java bayes or whatever wants ? grad c: when you when you say the input to the v java bayes , it takes a certain format , grad b: that 's no problem , but i even think that , once once you have this as running as a module what you want is you wanna say , " ok , give me the posterior probabilities of the go - there node , when this is happening . " when the person said this , the car is there , it 's raining , and this is happening . and with this you can specify the what 's happening in the situation , and what 's happening with the user . so we get after we are done , through the situation we get the user vector . so , this is a grad b: and , all the possible outputs , too . so , we have , , the , go - there decision node which has two elements , going - there and its posterior probability , and not - going - there and its posterior probability , because the output is always gon na be all the decision nodes and all the a all the posterior probabilities for all the values . grad c: and then we would just look at the , struct that we wanna look at in terms of if we 're only asking about one of the so like , if i ' m just interested in the going - there node , i would just pull that information out of the struct that gets return that would that java bayes would output ? grad b: , yes , but it 's a little bit more complex . as , if i understand it correctly , it always gives you all the posterior probabilities for all the values of all decision nodes . so , when we input something , we always get the , posterior probabilities for all of these . so there is no way of telling it t not to tell us about the eva values . grad b: so so we get this whole list of , things , and the question is what to do with it , what to hand on , how to interpret it , in a sense . 
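for concreteness , a minimal sketch of the kind of xml evidence document being described as input to the bayes-net. the element and attribute names here are invented for illustration and are not the project's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical evidence document: what was observed about the situation
# and the user before querying the belief-net.
doc = """
<evidence>
  <situation>
    <parking value="difficult"/>
    <location value="under-construction"/>
  </situation>
  <user>
    <type value="tourist"/>
  </user>
</evidence>
"""

root = ET.fromstring(doc)
observed = {f"{group.tag}.{node.tag}": node.get("value")
            for group in root
            for node in group}
print(observed)
# {'situation.parking': 'difficult',
#  'situation.location': 'under-construction',
#  'user.type': 'tourist'}
```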
so y you said if you " i ' m only interested in whether he wants to go there or not " , then look at that node , look which one grad b: look at that struct in the output , even though i would n't call it a " struct " . but . grad c: , i just was abbreviated it to struct in my head , and started going with that . grad c: not a c struct . that 's not what i was trying to k though . grad b: ok . and , the reason is why it 's a little bit more complex or why we can even think about it as an interesting problem in and of itself is so . the , let 's look at an example . grad c: , w would n't we just take the structure that 's outputted and then run another transformation on it , that would just dump the one that we wanted out ? grad b: , exactly . the @ xerxes allows you to say , u " just give me the value of that , and that . " but , we do n't really we 're interested in before we look at the complete at the overall result . so the person said , " where is x ? " and so , we want to know , is does he want info ? o on this ? or know the location ? or does he want to go there ? let 's assume this is our question . nuh ? do this in perl . so we get let 's assume this is the output . we should con be able to conclude from that . it 's always gon na give us a value of how likely we it is that he wants to go there and does n't want to go there , or how likely it is that he wants to get information . but , maybe w we should just reverse this to make it a little bit more delicate . so , does he wanna know where it is ? or does he wanna go there ? grad b: and i if there 's a clear winner here , and this is pretty , indifferent , then we might conclude that he actually wants to just know where , t , he does want to go there . grad c: , out of curiosity , is there a reason why we would n't combine these three nodes ? into one smaller subnet ? that would just be the question for we have " where is x ? " is the question , that would just be info - on or location ? based upon grad b: or go - there . a lot of people ask that , if they actually just wanna go there . people come up to you on campus and say , " where 's the library ? " you 're gon na say y you 're gon na say , g " go down that way . " you 're not gon na say " it 's five hundred yards away from you " or " it 's north of you " , or " it 's located " grad c: , but the there 's so you just have three decisions for the final node , that would link thes these three nodes in the net together . grad b: i whether i understand what you mean . but . again , in this given this input , we , also in some situations , may wanna postulate an opinion whether that person wants to go there now the nicest way , use a cab , or so s wants to know it wants to know where it is because he wants something fixed there , because he wants to visit t it or whatever . so , it n a all i ' m saying is , whatever our input is , we 're always gon na get the full output . and some things will always be too not significant enough . grad c: wha or i or i it 'll be tight . you wo n't it 'll be hard to decide . but , i , this is another , smaller , case of reasoning in the case of an uncertainty , which makes me think bayes - net should be the way to solve these things . so if you had if for every construction , grad c: you could say , " , there here 's the where - is construction . " and for the where - is construction , we know we need to l look at this node , that merges these three things together as for th to decide the response . 
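the dialogue above describes getting posteriors for every value of every decision node and then asking whether there is a clear winner. a sketch of that filtering step , with made-up numbers and a made-up margin threshold:

```python
# Made-up output: every decision node with posteriors over its values.
posteriors = {
    "go-there": {"true": 0.75, "false": 0.25},
    "info-on":  {"true": 0.55, "false": 0.45},
    "location": {"true": 0.52, "false": 0.48},
}


def clear_winner(dist, margin=0.2):
    """Top value if it beats the runner-up by at least `margin`, else None."""
    ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None


for name, dist in posteriors.items():
    print(name, "->", clear_winner(dist) or "no clear winner")
# go-there -> true; the other two nodes are too close to call
```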
and since we have a finite number of constructions that we can deal with , we could have a finite number of nodes . say , if we had to y deal with arbitrary language , it would n't make any sense to do that , because there 'd be no way to generate the nodes for every possible sentence . but since we can only deal with a finite amount of grad b: so , the idea is to f to feed the output of that belief - net into another belief - net . grad c: , so take these three things and then put them into another belief - net . grad c: , d for the where - is question . so we 'd have a node for the where - is question . grad b: . but we believe that all the decision nodes are can be relevant for the where - is , and the where how - do - i - get - to or the tell - me - something - about . grad c: as long as y you 're not wearing your h headphones . , i do i see , i if this is a good idea or not . i ' m just throwing it out . but , it seems like we could have i mea or we could put all of the r information that could also be relevant into the where - is node answer node thing and grad b: let 's not forget we 're gon na get some very strong input from these sub dis from these discourse things , right ? tell me the location of x . or " where is x located at ? " grad c: we u , i know , but the bayes - net would be able to the weights on the nodes in the bayes - net would be able to do all that , would n't it ? here 's a k ! , i 'll until you 're plugged in . , do n't sit there . sit here . how you do n't like that one . it 's ok . that 's the weird one . that 's the one that 's painful . that hurts . it hurts so bad . i ' m h i ' m happy that they 're recording that . that headphone . the headphone that you have to put on backwards , with the little thing and the little foam block on it ? it 's a painful , painful microphone . grad c: , here it is . h this thingy . , it 's " the crown " . the crown of pain ! grad c: are you are your mike o is your mike on ? so you ' ve been working with these guys ? what 's going on ? grad a: yes , i have . and , i do . , alright . s so where are we ? grad b: assume we have something coming in . a person says , " where is x ? " , and we get a certain we have a situation vector and a user vector and everything is fine ? an - an and our grad c: did you just sti did you just stick the m the microphone actually in the tea ? grad b: let 's just assume our bayes - net just has three decision nodes for the time being . these three , he wants to know something about it , he wants to know where it is , he wants to go there . grad c: in terms of , these would be wha how we would answer the question where - is , this is i that 's what you s it seemed like , explained it to me earlier grad c: w we we 're we wanna know how to answer the question " where is x ? " grad b: no , i can do the timing node in here , too , and say " ok . " grad c: , but in the s , let 's just deal with the s the simple case of we 're not worrying about timing or anything . we just want to know how we should answer " where is x ? " grad b: ok , and , go - there has two values , right ? , go - there and not - go - there . let 's assume those are the posterior probabilities of that . grad b: info - on has true or false and location . so , he wants to know something about it , and he wants to know something he wants to know where - it - is , grad b: and , in this case we would probably all agree that he wants to go there . 
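the proposal here , one second-stage decision layer per construction fed by the first net's outputs , would need bookkeeping of which decision nodes each construction cares about. a sketch with a hypothetical construction inventory:

```python
# Hypothetical inventory: which first-stage decision nodes feed the
# second-stage subnet for each construction.
RELEVANT_NODES = {
    "where-is":        ("go-there", "info-on", "location"),
    "how-do-i-get-to": ("go-there", "eva", "timing"),
    "tell-me-about":   ("info-on", "history"),
}


def second_stage_inputs(construction, posteriors):
    """Keep only the first-net outputs that the construction's subnet uses."""
    wanted = RELEVANT_NODES[construction]
    return {name: posteriors[name] for name in wanted if name in posteriors}
```

this is workable precisely because , as argued above , the set of constructions the system handles is finite.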
our belief - net thinks he wants to go there , in the , whatever , if we have something like this here , and this like that and maybe here also some grad b: then we would , " aha ! he , our belief - net , has s stronger beliefs that he wants to know where it is , than actually wants to go there . " grad b: what do you mean by " differently weighted " ? they do n't feed into anything really anymore . grad a: if we trusted the go - there node more th much more than we trusted the other ones , then we would conclude , even in this situation , that he wanted to go there . grad c: so the but i the k the question that i was as er wondering or maybe robert was proposing to me is how do we d make the decision on as to which one to listen to ? grad a: , so , the final d decision is the combination of these three . so again , it 's some , grad c: ok so , then , the question i so then my question is t to you then , would be so is the only r reason we can make all these smaller bayes - nets , because we know we can only deal with a finite set of constructions ? cuz oth if we 're just taking arbitrary language in , we could n't have a node for every possible question , ? grad c: , i like , in the case of . in the ca any piece of language , we would n't be able to answer it with this system , b if we just h cuz we would n't have the correct node . , w what you 're s proposing is a n where - is node , and and if we and if someone says , , something in mandarin to the system , we 'd - would n't know which node to look at to answer that question , grad b: i do n't see your point . what what i am thinking , or what we 're about to propose here is we 're always gon na get the whole list of values and their posterior probabilities . and now we need an expert system or belief - net that interprets that , that looks the values and says , " the winner is timing . now , go there . " , go there , timing , now . or , " the winner is info - on , function - off . " so , he wants to know something about it , and what it does . , regardless of the input . wh - regardle grad c: , but but how does the expert but how does the expert system know how who which one to declare the winner , if it does n't know the question it is , and how that question should be answered ? grad b: based on the k what the question was , so what the discourse , the ontology , the situation and the user model gave us , we came up with these values for these decisions . grad c: i know . but how do we weight what we get out ? as , which one i which ones are important ? so my i so , if we were to it with a bayes - net , we 'd have to have a node for every question that we knew how to deal with , that would take all of the inputs and weight them appropriately for that question . does that make sense ? yay , nay ? grad a: , are you saying that , what happens if you try to scale this up to the situation , or are we just dealing with arbitrary language ? grad c: , no . i my question is , is the reason that we can make a node f or ok . so , lemme see if i ' m confused . are we going to make a node for every question ? grad a: i do n't not necessarily , i would think . , it 's not based on constructions , it 's based on things like , there 's gon na be a node for go - there or not , and there 's gon na be a node for enter , view , approach . grad c: wel w ok . so , someone asked a question . how do we decide how to answer it ? grad b: , look at look face yourself with this pr question . you get this you 'll have y this is what you get . 
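the " differently weighted " idea raised here , trusting the go-there node more than the others , could look like the following sketch. the trust weights are invented for illustration.

```python
# Invented trust weights: how much each node's opinion counts
# in the final combination.
TRUST = {"go-there": 2.0, "info-on": 1.0, "location": 1.0}


def weighted_call(posteriors):
    """Scale P(true) by each node's trust weight; highest score wins."""
    scores = {name: TRUST.get(name, 1.0) * dist["true"]
              for name, dist in posteriors.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

with heavy enough trust on go-there , this would conclude " go there " even when another node's raw posterior is slightly higher , which is exactly the case discussed above.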
and now you have to make a decision . what do we think ? what does this tell us ? and not knowing what was asked , and what happened , and whether the person was a tourist or a local , because all of these factors have presumably already gone into making these posterior probabilities . what what we need is a just a mechanism that says , " aha ! there is " grad c: do n't think a " winner - take - all " type of thing is the grad a: , in general , like , we wo n't just have those three , right ? we 'll have , like , many nodes . so we have to , like so that it 's no longer possible to just look at the nodes themselves and figure out what the person is trying to say . grad b: because there are interdependencies , the , no . so if , the go - there posterior possibility is so high , , w if it 's if it has reached a certain height , then all of this becomes irrelevant . so . if even if the function or the history is scoring pretty good on the true node , true value grad c: cuz i , the way you describe what they meant , they were n't mutu , they did n't seem mutually exclusive to me . grad b: , if he does n't want to go there , even if the enter posterior proba go - there is no . enter is high , and info - on is high . grad c: those three nodes . the - d they did n't seem like they were mutually exclusive . grad c: so th s so , but some so , some things would drop out , and some things would still be important . but i what 's confusing me is , if we have a bayes - net to deal w another bayes - net to deal with this , is the only reason ok , so , i , if we have a ba - another bayes - net to deal with this , the only r reason we can design it is cuz we each question is asking ? grad c: and then , so , the only reason way we would question he 's asking is based upon , so if let 's say i had a construction parser , and i plug this in , i would each construction the communicative intent of the construction was and so then i would know how to weight the nodes appropriately , in response . so no matter what they said , if i could map it onto a where - is construction , i could say , " ! grad b: , i ' m also agreeing that a simple pru take the ones where we have a clear winner . forget about the ones where it 's all middle ground . prune those out and just hand over the ones where we have a winner . , because that would be the easiest way . we just compose as an output an xml mes message that says . " go there now . " enter historical information . " and not care whether that 's consistent with anything . but in this case if we say , " definitely he does n't want to go there . he just wants to know where it is . " or let 's call this " look - at - h " he wants to know something about the history of . so he said , " tell me something about the history of that . " now , the e but for some reason the endpoint - approach gets a really high score , too . we ca n't expect this to be at o point three , o point , three , three . somebody needs to zap that . or know there needs to be some knowledge that grad c: we , but , the bayes - net that would merge realized that i had my hand in between my mouth and my micr er , my and my microphone . so then , the bayes - net that would merge there , that would make the decision between go - there , info - on , and location , would have a node to tell you which one of those three you wanted , and based upon that node , then you would look at the other . , it i grad b: . it 's one of those , that 's it 's more like a decision tree , if you want . 
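the point that a sufficiently high go-there posterior makes other nodes irrelevant ( and a low one makes an endpoint score meaningless ) suggests a consistency pass over the raw output before anything is handed on. a sketch of one such rule ; the rule and threshold are hypothetical:

```python
def prune_inconsistent(posteriors, threshold=0.7):
    """Drop nodes whose answers are meaningless given a stronger one.

    Hypothetical rule from the discussion: if the user almost surely
    is not going there, the endpoint node ('eva') should be zapped.
    """
    out = dict(posteriors)
    if out.get("go-there", {}).get("false", 0.0) >= threshold:
        out.pop("eva", None)
    return out
```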
you first look o at the lowball ones , grad c: , i did n't intend to say that every possible there was a confusion there , k i did n't intend to say every possible thing should go into the bayes - net , because some of the things are n't relevant in the bayes - net for a specific question . like the endpoint is not necessarily relevant in the bayes - net for where - is until after you ' ve decided whether you wanna go there or not . grad a: i the other thing is that , . , when you 're asked a specific question and you do n't even like , if you 're asked a where - is question , you may not even look like , ask for the posterior probability of the , eva node , cuz , that 's what , in the bayes - net you always ask for the posterior probability of a specific node . so , you may not even bother to compute things you do n't need . grad a: you can compute , the posterior probability of one subset of the nodes , given some other nodes , but ignore some other nodes , also . , things you ignore get marginalized over . grad b: , but that 's just shifting the problem . then you would have to make a decision , grad b: ok , if it 's a where - is question , which decision nodes do i query ? grad d: , eventually , you still have to pick out which ones you look at . so it 's the same problem , grad b: , maybe it does make a difference in terms of performance , computational time . so either you always have it compute all the posterior possibilities for all the values for all nodes , and then prune the ones you think that are irrelevant , grad b: or you just make a p @ a priori estimate of what you think might be relevant and query those . grad c: so , you 'd have a decision tree query , go - there . if k if that 's false , query this one . if that 's true , query that one . and just do a binary search through the ? grad c: , in the case of go - there , it would be . in the case cuz if you needed an if y if go - there was true , you 'd wanna endpoint was . and if it was false , you 'd wanna d look at either lo - income info - on or history . grad a: that 's true , i . , so , in a way you would have that . grad c: i ca n't figure out how to get the probabilities into it . like , i 'd look at it 's somewha it 's boggling me . grad c: , . i d think i have n't figured out what the terms in hugin mean , versus what java bayes terms are . grad a: , i , jerry needs to enter marks , but i if he 's gon na do that now or later . but , if he 's gon na enter marks , it 's gon na take him awhile , i , and he wo n't be here . grad a: nancy ? , she was sorta finishing up the , calculation of marks and assigning of grades , but i if she should be here . or , she should be free after that , so assuming she 's coming to this meeting . i if she knows about it . grad b: because , what where we also have decided , prior to this meeting is that we would have a rerun of the three of us sitting together sometime this week again and finish up the , values of this . so we have , believe it or not , we have all the bottom ones here . grad c: , what do the , structures do ? so the , this location node 's got two inputs , grad c: , i see . ok , that was ok . that makes a lot more sense to me now . cuz it was like , that one in stuart 's book about , the grad c: , there 's a dog one , too , but that 's in java bayes , is n't it ? grad b: and we have all the top ones , all the ones to which no arrows are pointing . what we 're missing are the these , where arrows are pointing , where we 're combining top ones . 
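the decision-tree querying described here , ask for go-there first and only then query either the endpoint node or info-on and location , can be sketched as below. nodes off the taken branch are never queried and are implicitly marginalized out , which is the computational point being made ; the answer labels are invented.

```python
def answer_where_is(p_true):
    """Decision-tree querying: `p_true(name)` asks the net for one node only."""
    if p_true("go-there") >= 0.5:
        # Only now is the endpoint (enter/view/approach) node worth querying.
        return "give-route" if p_true("eva") >= 0.5 else "point-direction"
    if p_true("info-on") >= 0.5:
        return "describe-entity"
    return "state-location"


# Usage with a stubbed net:
stub = {"go-there": 0.8, "eva": 0.3}
print(answer_where_is(lambda name: stub.get(name, 0.0)))  # point-direction
```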
so , we have to come up with values for this , and . and maybe just fiddle around with it a little bit more . and , . and then it 's just , edges , many of edges . and , we wo n't meet next monday . grad b: do you guys , . so . part of what we actually want to do is schedule out what we want to surprise him with when he comes back . , so grad c: that would n't be disappointing . w we should do no work for the two weeks that he 's gone . grad b: , that 's actually what i had planned , personally . i had scheduled out in my mind that you guys do a lot of work , and i do nothing . and then , i grad b: bask in your glory . but , i do you guys have any vacation plans , because i myself am going to be , gone , but this is actually not really important . just this weekend we 're going camping . grad b: but we 're all going to be here on tuesday again ? looks like it ? ok , then . let 's meet again next tuesday . and , finish up this bayes - net . and once we have finished it , i we can , and that 's going to be more just you and me , because bhaskara is doing probabilistic , recursive , structured , object - oriented , grad d: . so you 're saying , next tuesday , is it the whole group meeting , or just us three working on it , or ? grad d: so , when you were saying we need to do a re - run of , like grad c: when you say , " the whole group " , you mean the four of us , and keith ? grad c: ami might be here , and it 's possible that nancy 'll be here ? so , grad c: you 're just gon na have to explain it to me , then , on tuesday , how it 's all gon na work out . grad b: we will . ok . because then , once we have it up and running , then we can start , defining the interfaces and then feed into it and get out of it , and then hook it up to some fake construction parser grad c: that you will have in about nine months or so . the first bad version 'll be done in nine months . grad b: , worry about the ontology interface and you can keith can worry about the discourse . , this is pretty , i hope everybody knows that these are just going to be dummy values , grad b: s so if the endpoint if the go - there is yes and no , then go - there - discourse will just be fifty - fifty . grad a: , what do you mean ? if the go - there says no , then the go - there is grad a: i do n't u understand . like , the go - there depends on all those four things . grad a: , i see . the d see , specifically in our situation , d and o are gon na be , so , whatever . grad d: so , so far we have is that what the keith node is ? ok . and you 're taking it out ? for now ? grad b: get it in here , so th we have the , , sk let 's call it " keith - johno grad c: ok , good . cuz you kn when you said people have the same problem , cuz my h goes after the e the v grad c: i always have to check , every time y i send you an email , a past email of yours , to make i ' m spelling your name correctly . grad b: but , when you abbreviate yourself as the " basman " , you do n't use any h 's . grad a: basman ? , it 's because of the chessplayer named michael basman , who is my hero . grad c: you 're a geek . it 's o k . i how do you pronou how do you pronounce your name ? grad d: i 'd probably still respond to it . i ' ve had people call me eva , grad c: no , not just eva , eva . like if i u take the v and s pronounce it like it was a german v ? grad c: which is why i as long as that 's o k . , i might slip out and say it accidentally . that 's all i ' m saying . grad a: . 
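the missing values being discussed , for the nodes with incoming arrows , are conditional probability tables. a toy cpt for one combining node with two parents , with dummy numbers only:

```python
# Toy CPT: P(go-there = true | discourse, situation). Dummy values.
CPT = {
    ("supports", "good"): 0.9,
    ("supports", "bad"):  0.6,
    ("against", "good"):  0.3,
    ("against", "bad"):   0.1,
}


def p_go_there(discourse, situation):
    return CPT[(discourse, situation)]


print(p_go_there("supports", "bad"))  # 0.6
```

filling in tables like this for every combining node is the " many edges " work the group schedules for the coming week.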
it does n't matter what those nodes are , anyway , because we 'll just make the weights " zero " for now . grad b: we 'll make them zero for now , because it who knows what they come up with , what 's gon na come in there . then should we start on thursday ? and not meet tomorrow ? i 'll send an email , make a time suggestion . grad c: maybe it 's ok , so that we can that we have one node per construction . cuz even in people , like , they what you 're talking about if you 're using some strange construction . grad c: , but , the , that 's what the construction parser would do . , if you said something completely arbitrary , it would f find the closest construction , but if you said something that was completel er h theoretically the construction parser would do that but if you said something for which there was no construction whatsoever , n people would n't have any idea what you were talking about . like " bus dog fried egg . " . grad c: or , something in mandarin , . or cantonese , as the case may be . what do you think about that , bhaskara ? grad c: , when p how many constructions do people have ? i have not the slightest idea . grad a: is it considered to be like in are they considered to be like very , s abstract things ? grad c: the any any form - meaning pair , to my understanding , is a construction . and form u starts at the level of noun or actually , maybe even sounds . grad c: and then , the c i , maybe there can be the can there be combinations of the dit grad c: it 's probab , i would s definitely say it 's finite . and at least in compilers , that 's all that really matters , as long as your analysis is finite . grad c: . if the if your brain was non - deterministic , then perhaps there 's a way to get , infin an infinite number of constructions that you 'd have to worry about . grad c: right . cuz if we have a fixed number of neurons ? so the best - case scenario would be the number of constructions or , the worst - case scenario is the number of constructions equals the number of neurons . grad c: right . but still finite . no , . not necessarily , is it ? we can end the meeting . ca n't you use different var different levels of activation ? across , lots of different neurons , to specify different values ?
###summary: the focus of the meeting was on a presentation of the work done already on the building of the bayes-net. the input layer , deriving information from things like the user and situation models , feeds into a set of decision nodes , such as the enter/view/approach ( eva ) endpoint. in any particular situation , most of the outputs will not be relevant to the given context. therefore , they will either have to be pruned a posteriori , or only a subset of the possible decision nodes will be computed on each occasion. the latter option could follow a binary search-tree approach and it could also be better in computational terms. in any case , on what basis the "winner" output is chosen is not clear. one suggestion was discussed: the particular constructions used can determine the pertinent decision ( output ) nodes. the complete prototype of the bayes-net will be presented in the next meeting. after that , it will be possible to define interfaces and a dummy construction parser , in order to test and link modules together. the suggestion that the most appropriate decision node of the belief-net in each situation could be chosen as a function of what construction was used , was deemed unsuitable at this stage.
there are many interdependencies between the output nodes that this approach could not take into account. the rest of the values for the bayes-net nodes will be built in within the week. the finished prototype will be presented during the next meeting. any set of inputs will provide either the whole range of output values of the bayes-net or an a priori selection of those outputs. in both cases , what is needed is a way to single out the appropriate outputs for any given context. for example , in the case of a "where is" question , whether the prevalent output should be "go-there" or "info-on" or even a third option has to be computed somehow. in any case , there are many input values that have not been entered in the bayes-net at this stage. furthermore , no inputs for the ontology and discourse can be built in yet , as they involve research that will be carried out in the future. finally , there has been a problem with adding probabilities in a net created with the hugin software , but this should be overcome very shortly. the presented bayes-net takes inputs from the situation , user , discourse and ontology models. there are several values ( elements ) defined in each of these models. the inputs are fed into the belief-net , which , in turn , outputs the posterior probabilities for the values of all the decision nodes. these comprise "go-there" , "eva" , "info-on" , "location" , "timing" , etc. at this stage , all the decision nodes are evenly weighted: regardless of the context , each output is trusted equally. input and output node structure was presented in xml , as this is the format that will be used for the system. a large number of the value probabilities have already been set.
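[editor's sketch] the summary above describes decision nodes whose conditional probabilities are, for now, uniform dummies ("fifty-fifty"). a minimal sketch of that idea follows; the node and state names (go-there, eva, the input models) come from the discussion, but the dict-based table is an illustrative stand-in for whatever the actual hugin/xml net looks like.

```python
# Toy stand-in for the belief-net's dummy decision nodes: every conditional
# probability table is uniform, so the context cannot yet influence the
# decision -- exactly the prototype state described above.
from itertools import product

def uniform_cpt(states, parents):
    """CPT mapping each combination of parent values to a uniform distribution."""
    parent_combos = product(*parents.values()) if parents else [()]
    return {combo: {s: 1.0 / len(states) for s in states} for combo in parent_combos}

# hypothetical input nodes standing in for the user/situation/discourse/ontology models
inputs = {"discourse": ["go-there-cue", "info-cue"],
          "ontology":  ["building", "statue"]}

go_there = uniform_cpt(["yes", "no"], inputs)          # dummy: always 50/50
eva      = uniform_cpt(["enter", "view", "approach"], inputs)

print(go_there[("go-there-cue", "building")])   # {'yes': 0.5, 'no': 0.5}
print(eva[("info-cue", "statue")])              # each state gets 1/3
```

once real weights replace the uniform tables, the same lookup structure lets the inputs actually shift the posteriors, which is the step planned for the finished prototype.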
professor b: whatever we say from now on , it can be held against us , right ? professor b: so i guess the problem is that i do n't actually know how these meetings are held , whether they are very informal and people just say what 's going on phd e: we just sorta go around and people say what 's going on , what 's the latest professor b: so what may be reasonable is if i first make a report on what 's happening in aurora in general , at least from my perspective . professor b: which was interesting , because it was the first time we realized we are not friends really , but we are competitors . cuz until then everything was like wonderful phd e: it seemed like there were still some issues , right ? that they were trying to decide ? professor b: and what happened was that they realized that the two leading proposals , which were french telecom alcatel and us , both had a voice activity detector . and i said " big surprise , we could have told you that four months ago , except we did n't because nobody else was bringing it up " . professor b: french telecom did n't volunteer this information either , cuz we had been working mainly on the voice activity detector for the past several months , because that 's what buys us the most . and everybody said " but this is not fair . we did n't know that . and it 's not really working on features . " and i said " you are right . i wish that you had provided better endpointing of the speech , or at least that we could modify the recognizer to account for these long silences , because otherwise that was n't a correct thing . " and so then everybody else says " we need to do a new evaluation without the voice activity detector , or we have to do something about it " . and in principle we agreed . professor b: we said " . but in that case , we would like to change the algorithm , because if we are working on different data , we probably will use a different set of tricks . " but unfortunately nobody can ever officially acknowledge that this can be done , because french telecom was saying " no , now everybody has access to our code , so everybody is going to copy what we did . " our argument was : everybody has access to our code , and everybody always had access to our code . we never denied that . we thought that people are honest , that if you copy something and it is protected by patent then you negotiate if you find our technique useful , we are very happy . and french telecom was saying " no , there are a lot of little tricks which can not be protected , and you guys will take them , " which probably is also true . it might be that people will take the algorithms apart and use the blocks from them . but i somehow think that it would n't be so bad , as long as people are honest about it . and they have to be honest in the long run , because for the winning proposal , what will be available is the code . so people can go to the code and say " listen , this is what you stole from me " , and deal with that . so i do n't see the problem . the biggest problem is that alcatel french telecom claims " we fulfilled the conditions . we are the best . we are the standard .
" and e and other people do n't feel that , because they so they now decided that is the whole thing will be done on - endpointed data , essentially that somebody will endpoint the data based on clean speech , because most of this the speechdat - car has the also close speaking mike and endpoints will be provided . and we will run again still not clear if we are going to run the if we are allowed to run new algorithms , but i assume so . because we would fight for that , really . but since u n u at least our experience is that only endpointing a mel cepstrum gets you twenty - one percent improvement overall and twenty - seven improvement on speechdat - car then obvious the database the baseline will go up . and nobody can then achieve fifty percent improvement . so they that there will be a twenty - five percent improvement required on h u m bad mis badly mismatched professor b: but you have the same prob mfcc has an enormous number of insertions . and so , so now they want to say " we will require fifty percent improvement only for matched condition , and only twenty - five percent for the serial cases . " and and they almost on that except that it was n't a hundred percent . and so last time during the meeting , brought up the issue , i said " quite frankly i ' m surprised how lightly you are making these decisions because this is a major decision . for two years we are fighting for fifty percent improvement and suddenly you are saying " no we will do something less " , but maybe we should discuss that . and everybody said " we discussed that and you were not a mee there " and i said " a lot of other people were not there because not everybody participates at these teleconferencing c things . " then they said " no because everybody is invited . " however , there is only ten or fifteen lines , so people ca n't even con participate . so they , and so they said " ok , we will discuss that . " immediately nokia raised the question and they said " we agree this is not good to dissolve the criterion . " so now officially , nokia is complaining and said they are looking for support , qualcomm is saying , too " we should n't abandon the fifty percent yet . we should at least try once again , one more round . " so this is where we are . i hope that this is going to be a adopted . next wednesday we are going to have another teleconferencing call , so we 'll see what where it goes . phd e: so what about the issue of the weights on the for the different systems , the - matched , and medium - mismatched and professor b: , that 's what that 's a g very good point , because david says " we ca we can manipulate this number by choosing the right weights anyways . " so while you are right but if you put a zero weight zero on a mismatched condition , or highly mismatched then you are done . but weights were also deter already decided half a year ago . so professor b: , people will not like it . now what is happening now is that i th that people try to match the criterion to solution . they have solution . now they want to make their criterion is and that this is not the right way . it may be that eventually it may ha it may have to happen . but it 's should happen at a point where everybody feels comfortable that we did all what we could . and i do n't think we did . , that this test was a little bit bogus because of the data and essentially there were these arbitrary decisions made , and everything . so , so this is where it is . 
so what we are doing at ogi now is working on the parts which we neglected a little bit , like noise separation . so we are looking into ways in which we can provide a better initial estimate of the mel spectrum , one which would be more robust to noise and so far , not much success . we tried things which bill byrne suggested a long time ago : instead of using the fourier spectrum from the fourier transform , use the spectrum from an lpc model . the argument there was that the lpc model fits the peaks of the spectrum , so it may be naturally more robust in noise . and , that makes sense , but so far we ca n't get much out of it . we may try some standard techniques like spectral subtraction and professor b: not much . or i was even thinking about looking back into these ad - hoc techniques , like dennis klatt 's suggestion that one way to deal with noisy speech is to add noise to everything . professor b: so , add a moderate amount of noise to all data . that makes any additive noise less effective , professor b: because you already had noise in there and it was working at the time . it was like one of these things , but if you think about it , it 's actually pretty ingenious . so , just take a spectrum and add some constant , c , to every value . professor b: exactly . and then if this data becomes noisy , it effectively becomes less noisy . but you can not add too much noise , because then your clean recognition goes down ; it 's yet to be seen how much . it 's a very simple technique : you just take your spectrum , use whatever is coming from the fft , and add a constant onto the power spectrum . or the other thing is , if you have a spectrum , you can start leaving out the parts which are low in energy , and then perhaps try to fit an all - pole model to such a spectrum , because an all - pole model will still try to put the continuation of the model into the parts which you set to zero . that 's what we want to try . i have a visitor from brno he 's young faculty , pretty hard - working so he 's looking into that . and then most of the effort is now also aimed at this trap recognition this is recognition from temporal patterns . professor b: this is familiar , because we gave you the name , but what it is , is that normally you recognize speech based on a short - term spectrum . essentially lpc , mel cepstrum everything starts with a spectral slice . so , given the spectrogram , you essentially slide along the time axis , taking one spectral slice after another . but you can also take the time trajectory of the energy at a given frequency , and what you get then is a vector . and this vector can be assigned to some phoneme namely , you can say that this vector describes the phoneme which is in the center of the vector . and you can try to classify based on that . so it 's a very different vector , with very different properties we do n't know much about it , but the truth is professor b: , you get many decisions . and then you can start thinking about how to combine these decisions . exactly , that 's what it is . because if you run this recognition , you still get about twenty percent correct on a frame - by - frame basis , so it 's much better than chance .
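[editor's sketch] stepping back to the noise-addition idea raised earlier in this report: it amounts to a one-line operation on the power spectrum. a minimal sketch, with made-up values for the constant and the array shapes.

```python
import numpy as np

def add_constant_noise_floor(power_spectrum: np.ndarray, c: float) -> np.ndarray:
    """Klatt-style trick described above: add a constant to every power-spectrum
    value, so that moderate additive noise in test data perturbs the features
    less. As noted in the discussion, too large a c hurts clean-speech accuracy."""
    return power_spectrum + c

# toy usage: frames x fft-bins array of made-up power values
frames = np.abs(np.random.randn(100, 129)) ** 2
floored = add_constant_noise_floor(frames, c=0.1)
```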
professor b: that 's another thing . currently we always start with the critical - band spectrum , for various reasons . but the latest observation is that you can get quite a big advantage from using two critical bands at the same time . professor b: and there are some reasons for that , reasons i could talk about i would have to tell you about things like masking experiments which yield critical bands , and also experiments with release of masking , which actually tell you that something is happening across critical bands . and phd e: how do you convert this energy over time in a particular frequency band into a vector of numbers ? phd e: it 's just the amount of energy in that band in that time interval ? professor b: yes , yes . and that 's what i ' m saying : this is a starting vector . it 's just like a short - term spectrum . but now we are trying to understand what this vector actually represents . one question is " how correlated are the elements of this vector ? " it turns out they are quite correlated , especially the neighboring ones , because they represent almost the same configuration of the vocal tract . so there 's a very high correlation , and classifiers which use a diagonal covariance matrix do n't like it . so we 're thinking about de - correlating them . then the question is " can you describe the elements of this vector by gaussian distributions , and to what extent ? " and so on . so we are learning quite a lot about that . and then another issue is how many vectors we should be using the minimum is one . but is the critical band the right dimension ? we somehow made an arbitrary decision , " yes " . but now we are thinking a lot about how to use at least the neighboring band , because that seems to be i somehow start to believe that 's what 's happening in recognition . cuz a lot of experiments point to the fact that people can split the signal into critical bands you are quite capable of processing a signal independently in individual critical bands ; that 's what masking experiments tell you . but at the same time you most likely pay attention to at least the neighboring bands when you are making any decisions : you compare what 's happening in this band to what 's happening in the neighboring bands , and that 's how you make decisions . that 's why the articulatory events which fletcher talks about span about two critical bands . you need at least two . you need some relative relation ; an absolute number does n't tell you the right thing . you need to compare it to something else that 's happening , but in the close neighborhood . so if you are making a decision about what 's happening at one kilohertz , you want to know what 's happening at nine hundred hertz and maybe at eleven hundred hertz , but you do n't much care what 's happening at three kilohertz . phd e: so it 's like saying that what 's happening at one kilohertz depends on what 's happening around it . it 's relative to it . professor b: to some extent , that is also true . but what humans are very much capable of doing is that if exactly the same thing is happening in two neighboring critical bands , recognition can discard it that 's what 's happening .
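[editor's sketch] the band-trajectory vectors just described can be sketched in a few lines, including a dct step as one standard way to do the de-correlation that diagonal-covariance classifiers want. the window length, band count, and the dct choice are illustrative assumptions, not the actual ogi setup.

```python
import numpy as np
from scipy.fftpack import dct

def trap_vector(log_band_energies: np.ndarray, band: int, center: int,
                half_len: int = 50, decorrelate: bool = True) -> np.ndarray:
    """Temporal pattern for one critical band: the trajectory of log energy in
    `band`, centered on frame `center`. The vector is labeled with the phoneme
    at the center frame, as in the discussion above."""
    traj = log_band_energies[center - half_len: center + half_len + 1, band]
    # neighboring trajectory samples are highly correlated; a DCT is one way
    # to de-correlate them before a diagonal-covariance Gaussian classifier
    return dct(traj, norm="ortho") if decorrelate else traj

# toy usage: 1000 frames x 15 critical bands of made-up log energies
E = np.random.randn(1000, 15)
v  = trap_vector(E, band=7, center=500)                          # one band
v2 = np.concatenate([trap_vector(E, b, 500) for b in (6, 7)])    # two neighboring bands
```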
professor b: and so if you add noise that normally masks the signal , you can show that if you add the noise outside the critical band , it does n't affect the decisions you 're making about a signal within the critical band unless the noise is modulated . if the noise in the neighboring critical bands is modulated with the same modulation frequency as the noise in the critical band , the amount of masking is less . professor b: so the masking curve normally looks like this : you start with no noise , then you expand the noise band , and the amount of masking increases . and when you hit a certain point , which is the critical band , the amount of masking stays the same . that 's the famous experiment of fletcher , a long time ago that 's where people started thinking " this is interesting ! " but if you modulate the noise , the masking goes up , and the moment you start hitting another critical band , the masking goes down . so essentially that 's a very clear indication that cognition can take into consideration what 's happening in the neighboring bands . but if you go too far if the noise is very broad , you are not gaining much more . if you are far away from the frequency at which the signal is , then even when the noise is co - modulated , it 's not helping you much . so things like this we are playing with , in the hope that perhaps we could eventually use this in a real recognizer . phd e: but you probably wo n't have anything before the next time we have to evaluate , professor b: probably not . most likely we will not have anything which would comply with the rules , because professor b: the current system breaks the latency requirement and takes a significant amount of processing , because we do n't know any better yet than to use the neural net classifiers and traps . though the work which everybody is looking at now aims at finding out what to do with these vectors so that a simple gaussian classifier would be happier with them or to what extent a gaussian classifier should be unhappy with them , and how to gaussian - ize the vectors . so this is what 's happening . then sunil asked me for one month 's vacation , and since he has not taken any vacation for two years , i did n't have the heart to tell him no . so he 's in india . professor b: he may be looking for a girl , for all i know i do n't ask . i know that when narayanan did that last time , he came back engaged . phd e: i ' ve known other friends who go back home to india for a month and come back married , professor b: and then what happened with narayanan was that he started pushing me that he needed to get his phd , because they would n't give him his wife . and she 's very pretty and he loves her , and so we had to really professor b: i had an incentive , because he always had this plan , except he never told me . professor b: he told me the day when we did very well at our nist evaluations of speaker recognition technology , and he was involved there . after the presentation we were driving home and he told me . professor b: so i said " , ok " . it took him another three quarters of a year , but he was out . so it would n't surprise me if he has a plan like that , though pratibha still needs to get out first , cuz pratibha has been there a year longer .
and satya needs to get out first of all , because he has already served four years , though for one year he was getting his masters . professor b: as for evaluation , the next meeting is in june . and professor b: i assume so . yes , but nobody has even set the date yet for delivering the endpointed data , and things like that . but what would be extremely useful is if we can come to our next meeting and say " we did get fifty percent improvement . if you are interested , we can eventually tell you how but we can get fifty percent improvement . " because people will be saying it 's impossible . professor b: yes . but i assume that it will be similar ; i do n't see a reason why it should n't be . professor b: i do n't see a reason why it should be worse . cuz if it is worse , then we will raise the objection , we will say " how come ? " because if we just use our voice activity detector , which we do n't even claim is wonderful it 's just one of them professor b: we get this improvement ; how come we do n't see it on your endpointed data ? phd c: it could be even better , because the voice activity detector that i chose is something like cheating : it 's using the alignment of the speech recognition system , phd c: and only the alignment on the clean channel , and then mapping this alignment to the noisy channel . professor b: and on clean speech data . david told me yesterday or actually he told harry from qualcomm , and harry had brought up the suggestion that we should still go for fifty percent he says , are you aware that your system does only thirty percent compared to the endpointed baselines ? so they must have run something already . and harry said " yes , but we think we have n't said the last word yet , that we have other things we can try . " so there 's a lot of discussion now about this new criterion . because nokia was objecting , and together with qualcomm we supported that ; we said " yes " . now everybody else is saying " you guys must be out of your minds . " guenter hirsch , who does n't speak for ericsson anymore because he is not with ericsson and ericsson may withdraw from the whole aurora activity , because they have so many troubles now ; ericsson is laying off twenty percent of its people . professor b: guenter already got a job he had been working on it for the past two or three years he got a job at some fachhochschule , a technical college not too far from aachen . so it 's like a university professorship not quite a university , it 's not aachen university , but it 's a good school and he 's happy . and he was hoping to work with ericsson on a consulting basis , but right now he says it does n't look like anybody there is even thinking about speech recognition . they think about survival . but this is being discussed right now , and it 's possible that it may get through , that we will still stick to fifty percent . but that means that probably nobody will get this improvement yet , with the current system . which essentially we should be happy with , because it would mean that at least people may be forced to look into alternative solutions . phd c: but maybe we are not too far from fifty percent , from the new baseline . professor b: how did you come up with this number ? if you improve all the baselines by twenty percent , it 's just a quick computation ?
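[editor's sketch] the "quick computation" being asked about can be made explicit. a toy sketch using the numbers mentioned in this exchange (roughly twenty-one percent improvement from endpointing alone); it shows why a system at fifty percent against the old baseline looks like only thirty-something percent against an endpointed baseline, which is at least in the ballpark of the thirty percent figure quoted above.

```python
def rel_improvement(baseline_wer, system_wer):
    return (baseline_wer - system_wer) / baseline_wer

old_baseline = 1.0                                  # normalized word error rate
endpointed_baseline = old_baseline * (1 - 0.21)     # ~21% from endpointing alone
system = old_baseline * (1 - 0.50)                  # 50% vs the old baseline

print(f"vs old baseline:        {rel_improvement(old_baseline, system):.1%}")         # 50.0%
print(f"vs endpointed baseline: {rel_improvement(endpointed_baseline, system):.1%}")  # ~36.7%
```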
phd e: how 's your documentation , or whatever it was you guys were working on last week ? phd d: it 's more or less finished . we need a little more time to improve the english , and maybe to fill in some small details , something like that , but it 's more or less ready . phd e: so have you been running some new experiments ? i saw some jobs of yours running on some of the machines phd c: right . we ' ve done some strange things , like removing c - zero or c - one from the vector of parameters , and we noticed that c - one is almost not useful . you can remove it from the vector and it does n't hurt . phd e: really ? ! that has no effect ? is this in the baseline ? or in professor b: so we were just discussing , since you mentioned that driving in the car with morgan this morning , we were discussing a good experiment for a beginning graduate student who wants to get a lot of numbers on something : imagine that you start putting every coefficient which you are using in your vector to some general power . professor b: a general power . like you take the power of two , or take a square root . so suppose that you are working with c - one . if you take its square root , that effectively makes this component of your model half as effective , because your gaussian mixture model computes the mean , and the mean is in the exponent of the gaussian function . professor b: so you 're compressing the range of this coefficient , so it 's becoming less effective . professor b: right ? morgan was saying this might be an alternative way to play with a fudge factor : just compress the whole vector . and i said " in that case , why do n't we start compressing individual elements ? " because in the old days , when people were still doing template matching and euclidean distances , we were doing this liftering of parameters , because we observed that higher parameters were more important than lower ones for recognition . and c - one contributes mainly slope , and it 's highly affected by the frequency response of the recording equipment and that thing . so we were coming up with all these various lifters : bell labs had this raised - cosine lifter , which is still built into htk for reasons unknown to anybody ; we had exponential lifters , triangular lifters , a number of lifters . so there may be a way to fiddle with the professor b: insertions , deletions , or the relative importance of the various parameters . the only problem is that there 's an infinite number of combinations , and professor b: you need a lot of graduate students , and a lot of computing power . phd e: you need a genetic algorithm that tries random permutations of these things . professor b: if you were at bell labs i should n't be saying this on a mike or ibm , maybe that 's what somebody would be doing the places which have a lot of computing power . it would be a reasonable search , but i wonder if there is n't some smarter way of doing this search , like when we are searching for the best discriminants . phd e: actually , i do n't think this would be all that bad .
you compute the features once , and then these exponents are just applied to that professor b: because essentially you are saying " this feature is not important " , or less important so that 's the painful one , phd e: so for each set of exponents that you would try , it would require a training and a recognition ? professor b: but wait a minute you may not need to retrain the model . you just may need to give less weight to the component of the model which represents this particular feature . you do n't have to retrain it . phd e: so instead of altering the feature vectors themselves , you modify the gaussians in the models . professor b: you modify the gaussian in the model , but in the test data you would have to put the feature to the power ; in the trained model , all you would have to do is multiply the model component by an appropriate constant . phd e: but if you 're altering the model , why , in the test data , would you have to muck with the cepstral coefficients ? professor b: because in the test data you do n't have a model you have only data . professor b: that is true . so what you want to do if you observe something like stephane observes , that c - one is not important , you can do two things . if you have a trained recognizer , in the model , the component for that dimension phd e: all of the means and variances that correspond to c - one , you put them to zero . professor b: you could zero it out . but what i ' m proposing now is : if it is important , but not as important , you multiply it by point one in the model . professor b: i would have to look at the math , how the model phd e: you 'd have to modify the standard deviation , so that you make it wider or narrower . professor b: effectively , that 's what you do : you modify the standard deviation as it was trained . you put a constant in front of the model component , and effectively what you 're doing is modifying the deviation . phd e: so by making the standard deviation narrower , your scores get worse unless you 're exactly right on the mean . professor b: yes , so you are making this particular dimension less important . see , what you are fitting is a multidimensional gaussian it has thirty - nine dimensions , or thirteen dimensions if you ignore deltas and double - deltas . so to make the dimension which stephane sees as not useful less important , what you do is de - weight this particular component in the model . but you ca n't do it in the test data , because you do n't have a model when the test comes ; what you can do is take this particular component and compress it . it then gets less variance and subsequently becomes less important . phd e: could n't you just do that to the test data and not do anything with your training data ? professor b: that would be very bad that would n't work because your model was trained expecting a certain variance on c - one , and the model thinks c - one is important .
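[editor's sketch] a toy illustration of the de-weighting mechanics being debated here: scale one dimension's variance in a diagonal-covariance gaussian and watch that dimension's influence on the score shrink, with no retraining. all numbers are made up; this is a sketch of the idea, not anyone's actual recognizer.

```python
import numpy as np

def diag_gauss_loglik(x, mean, var):
    """Total log-likelihood of a diagonal-covariance Gaussian."""
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

mean, var = np.zeros(13), np.ones(13)   # made-up 13-dim cepstral model
good = np.zeros(13)                     # frame matching the model
bad = good.copy(); bad[1] = 3.0         # same frame, but c-1 is way off

# with the trained variance, a c-1 mismatch costs a lot of log-likelihood
print(diag_gauss_loglik(good, mean, var) - diag_gauss_loglik(bad, mean, var))        # 4.5

# de-weight c-1 without retraining: inflate its variance in a copy of the model
var_dw = var.copy(); var_dw[1] *= 100.0
print(diag_gauss_loglik(good, mean, var_dw) - diag_gauss_loglik(bad, mean, var_dw))  # 0.045
```

the normalization term is shared by both frames, so only the mismatch penalty matters for discrimination, and inflating the variance scales that penalty down; shrinking the variance does the opposite, which connects to the tiny-variance anecdote that comes up a little later.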
professor b: after you train the model , you could still do what i was proposing initially : during the training you compress c - one , so it becomes less important in the training . but if you want to run an extensive experiment without retraining the model , you do n't have to retrain it . you train it on the original vector . but afterwards , when you are doing this parametric study of the importance of c - one , you de - weight the c - one component in the model , and you compress this component in the test data by the same amount . phd e: could you also , if you wanted to try an experiment leaving out , say , c - one could n't you , in your test data , modify all of the c - one values to be way outside the normal range of the gaussian for c - one that was trained in the model , so that effectively c - one never really contributes to the score ? professor b: no , because then your model would be unlikely your likelihood would be low , because you would be providing a severe mismatch . phd e: but what if you set it to the mean of the model , then ? you set all c - ones coming in through your test data to the mean that your model had . professor b: no , i do n't think that would be the same . if you set it to the mean no , you ca n't do that , chuck . professor b: because that would really be fiddling with the data ; you ca n't do that . professor b: but what you can do i ' m reasonably confident , and i ' m putting it on the record , people will listen to it for centuries now is : you train the model with the original data . then you decide that you want to see how important c - one is . so you divide the c - one component in the model by two , and you compress your test data by a square root . then you still have a perfect match , except that this c - one component will be half as important in the overall score . then you divide it by four and take a fourth root , and so on . and if you think that some component is more important than it is , based on training , then you multiply this particular component in the model professor b: multiply this component by a number larger than one , and you put your data to a power higher than one . then it becomes more important in the overall score , i believe . phd c: no but the variance is in the denominator in the gaussian equation , so maybe it 's the contrary : if you want to decrease the importance of a parameter , you have to increase its variance . phd e: but if your original data for c - one had a mean of two , and now you 're changing that by squaring it , the mean of your c - one data is now four but your model still has a mean of two . so even though you ' ve expanded the range , your mean does n't match anymore . phd c: what i see could be done is : you do n't change your features , which are computed once and for all , but you just tune the model . you have your features . you train your model on these features .
and then if you want to decrease the importance of c - one , you just take the variance of the c - one component in the model and increase it or decrease it , depending professor b: you would have to modify the mean in the model too . i agree with you . but it 's do - able , phd e: you could increase the variance to decrease the importance , because if you have a huge variance , you 're dividing by a large number , and you get a very small contribution . phd e: actually , this reminds me of something that happened when i was at bbn . we were playing with putting pitch into the mandarin recognizer . and this particular pitch algorithm , when it did n't think there was any voicing , was spitting out zeros . so when we did clustering , we were getting groups of features phd e: so anytime one of those vectors came in that had a zero in it , we got a great score . it was just , { nonvocalsound } , an incredibly { nonvocalsound } high score , and that was throwing everything off . so if you have a very small variance , you get really good scores when you get something that matches . so that 's a way that 's interesting . so that would n't require any retraining . phd e: you have a step where you modify the models make a copy of your models with whatever variance modifications you want and rerun recognition . and then do a whole bunch of those . that could be set up fairly easily , and you have a whole bunch of grad a: did n't you say you got these htk 's set up on the new linux boxes ? phd e: , and right now they 're increasing the memory on the linux box . professor b: because then you just write the " do " - loops , and you can pretend that you are working while you go fishing . professor b: then you are in the same mode as all of those arpa people since it is on the record , i ca n't say which company it was , but it was reported to me that somebody visited a company , and during a discussion there was this guy who was always hitting the carriage return on a computer . after two hours the visitor said " why are you hitting this carriage return ? " and he said " we have a government contract , and they pay us by the amount of computer time we use . " it was in the old days , when there were pdp - eights and that thing . professor b: because they literally had to monitor , at the time , how much computer time was being spent on this particular project . phd e: have you ever seen those little it 's this thing that 's the shape of a bird , and it has a red ball and its beak dips into the water ? phd e: so if you could hook that up so it hit the keyboard that 's an interesting experiment . professor b: it would be similar to i knew some people that was in old communist czechoslovakia , so we were watching for american airplanes coming to spy on us at the time . there were three guys stationed in the middle of the woods on one lonely watchtower , spending a year and a half there , because there was this service . and very quickly they made friends with local girls and local people in the village professor b: and there was one plane flying over , always , and that was the only work which they had .
at four in the afternoon they had to report that there was a plane from prague to brno flying over . so the first thing was that they would always run back at four o ' clock and quickly make a call , " this plane is passing " . the second thing was that they took the line from the post to the local pub , and they were calling from the pub . but the third thing which they did , and where they screwed up , was that finally they paid the pub owner to make these phone calls , because they did n't even bother to be there anymore . and one day there was no plane . at least they were smart enough to look whether the plane was flying , but the pub owner says " my , four o ' clock , ok , quickly pick up the phone , call that there 's a plane flying . " there was no plane for some reason , phd e: that would n't be too difficult to try . maybe i could set that up . and we 'll just professor b: at least go test the assumption about c - one to begin with . but then one can think about some predictable way to change all of them . it 's just like what we used to do with the distance measures . it might be that phd e: so the first set of variance weighting vectors would be just modifying one and leaving the others the same . professor b: because , you see , what is happening in such a model is that whatever has a low variance is more reliable , phd e: but there could just naturally be low variance , because i ' ve noticed in the higher cepstral coefficients the numbers seem to get smaller , so professor b: that 's why the lifters people used were inverse - variance weighting lifters : that makes the euclidean distance more like a mahalanobis distance with a diagonal covariance , when you knew what all the variances were over the old data . what they would do is weight each coefficient by the inverse of its variance . it turns out that the variance decreases at least as fast , i believe , as the index of the cepstral coefficient you can show that analytically . so typically you need to weight the higher coefficients more than the lower coefficients . while we are talking about aurora , i still wanted to make a plea to encourage more communication between the different parts of the distributed center , even when there is nothing to say but " the weather is good in berkeley " . i ' m sure that it 's being appreciated in oregon , and maybe it will generate similar responses down here . phd e: if we mail to " aurora - inhouse " , does that go up to you guys also ? professor b: and then we also can send to the same address , right , and it goes to everybody professor b: because what happens naturally in research , i know , is that people essentially start working on something and they do n't want to be bothered much . but then the danger in a group like this is that two people are working on the same thing , and both of them come up with a very good solution , but it could have been done with half the effort . there 's another thing which i wanted to report . lucash wrote the software for this aurora - two system a reasonably good version , because he 's doing it for intel , but i trust that we have the rights to use it and distribute it and everything , cuz intel 's intention originally was to distribute it free of charge anyway . and so we will make it available at least you can see the software and whether it is of any use .
it might be a reasonable point to perhaps start converging , because morgan 's point is that he is an experienced guy ; he says it 's very difficult to collaborate if you are working with supposedly the same thing , in quotes , which is not actually the same one person is using one set of hurdles , another is using a different set , and then it 's difficult to compare . phd c: what about harry ? we received a mail last week that you are starting to do some experiments . professor b: because intel paid us should i say this on a microphone ? some amount of money , not much much less than we should have gotten for this amount of work . and they wanted to have the software so that they can also play with it , which means it has to be in a certain environment they actually use some intel libraries . but in the process , lucash just rewrote the whole thing , because he figured that rather than trying to make sense of including the icsi software not for training the nets he rewrote it , or maybe somehow reused parts of it , so that the whole thing , including the trained mlp , is one piece of software . is it useful ? grad a: , i remember when we were trying to put together all the icsi software for the submission . professor b: that 's what he was saying he said that there were just so many libraries and nobody knew what was used when , and so that 's where he started , and that 's where he realized that it needed to be at least cleaned up . so this is available . phd c: the only thing i would check is whether he uses the intel math libraries , phd c: because if that 's the case , it 's maybe not so easy to use it on another architecture . professor b: maybe not in a first approximation , because he started first with plain c or c - plus - plus i will check on that . and otherwise the intel libraries are available freely , but they may be running only on windows . professor b: that is possible . that 's why intel is distributing it , or that 's phd c: at least there are versions optimized for their architecture . i never checked these carefully professor b: i know there were some issues ; we do all the development on linux , and we have only three suns we have them only because they have a spert board in them . otherwise we are working almost exclusively with pc 's now , with intel . in that way intel succeeded with us , because they gave us so many good machines for very little money or nothing . so we run everything on intel . phd e: hynek , i do n't know if you ' ve ever done this . the way it works is each person goes around in turn , and you say the transcript number and then you read the digits , the strings of numbers , as individual digits . so you do n't say " eight hundred and fifty " , you say " eight five " , and .
the meeting recorder group of icsi at berkeley met without their most senior member; attending instead was a visitor from research partner ogi. he reported on a recent project meeting from his group's perspective. there was much politics involved , and disagreement between groups. he also brought the icsi members up to date with his group's latest work. the icsi group reported their most recent progress and detailed their recent findings. having discussed it with the icsi project leader , the ogi member told of some future investigation they had devised , which would look at adjusting the importance of individual features. this led to a great deal of discussion. there were also further calls for greater communication between the groups. icsi currently has no mailing list which includes ogi personnel , so they will set one up. there was disagreement at the project meeting over two groups' development of voice activity detectors , particularly since one group makes their code available to all and the other does not. there were also issues relating to the amount of improvement required if the baseline is improved. ogi are looking at methods of making the initial spectral estimates more robust to noise , though with little success. most of their effort is now on trap recognition from temporal patterns. icsi members have almost finished their report , and have been trying some things out , which has led to the conclusion that the c-1 cepstral coefficient is not at all useful.
###dialogue: professor b: whatever we say from now on , it can be held against us , right ? professor b: so i the problem is that i actually how th these held meetings are held , if they are very informal and just people are say what 's going on phd e: we just sorta go around and people say what 's going on , what 's the latest professor b: so i that what may be a reasonable is if i first make a report on what 's happening in aurora in general , at least what from my perspective . professor b: which was interesting because it was for the first time we realized we are not friends really , but we are competitors . cuz until then it was like everything was like wonderful phd e: it seemed like there were still some issues , right ? that they were trying to decide ? professor b: and what happened was that they realized that if two leading proposals , which was french telecom alcatel , and us both had voice activity detector . and i said " big surprise , we could have told you that n four months ago , except we did n't because nobody else was bringing it up " . professor b: french telecom did n't volunteer this information either , cuz we were working on mainly on voice activity detector for past several months because that 's buying us the most thing . and everybody said " but this is not fair . we did n't know that . " and the it 's not working on features really . and be i . i said " , you are right , if i wish that you provided better end point at speech because or at least that if we could modify the recognizer , to account for these long silences , because otherwise that th that was n't a correct thing . " and so then ev everybody else says " we should we need to do a new eval evaluation without voice activity detector , or we have to do something about it " . and in principle i we . professor b: we said " . because but in that case , we would like to change the algorithm because if we are working on different data , we probably will use a different set of tricks . but unfortunately nobody ever officially can somehow acknowledge that this can be done , because french telecom was saying " no , now everybody has access to our code , so everybody is going to copy what we did . " our argument was everybody ha has access to our code , and everybody always had access to our code . we never denied that . we thought that people are honest , that if you copy something and if it is protected by patent then you negotiate , , if you find our technique useful , we are very happy . but and french telecom was saying " no , there is a lot of little tricks which can not be protected and you guys will take them , " which probably is also true . , it might be that people will take th the algorithms apart and use the blocks from that . but i somehow think that it would n't be so bad , as long as people are happy abou honest about it . and they have to be honest in the long run , because winning proposal again what will be available th is will be a code . so the people can go to code and say " listen this is what you stole from me " ? so let 's deal with that . so i do n't see the problem . the biggest problem is that f that alcatel french telecom cl claims " we fulfilled the conditions . we are the best . we are the standard . 
" and e and other people do n't feel that , because they so they now decided that is the whole thing will be done on - endpointed data , essentially that somebody will endpoint the data based on clean speech , because most of this the speechdat - car has the also close speaking mike and endpoints will be provided . and we will run again still not clear if we are going to run the if we are allowed to run new algorithms , but i assume so . because we would fight for that , really . but since u n u at least our experience is that only endpointing a mel cepstrum gets you twenty - one percent improvement overall and twenty - seven improvement on speechdat - car then obvious the database the baseline will go up . and nobody can then achieve fifty percent improvement . so they that there will be a twenty - five percent improvement required on h u m bad mis badly mismatched professor b: but you have the same prob mfcc has an enormous number of insertions . and so , so now they want to say " we will require fifty percent improvement only for matched condition , and only twenty - five percent for the serial cases . " and and they almost on that except that it was n't a hundred percent . and so last time during the meeting , brought up the issue , i said " quite frankly i ' m surprised how lightly you are making these decisions because this is a major decision . for two years we are fighting for fifty percent improvement and suddenly you are saying " no we will do something less " , but maybe we should discuss that . and everybody said " we discussed that and you were not a mee there " and i said " a lot of other people were not there because not everybody participates at these teleconferencing c things . " then they said " no because everybody is invited . " however , there is only ten or fifteen lines , so people ca n't even con participate . so they , and so they said " ok , we will discuss that . " immediately nokia raised the question and they said " we agree this is not good to dissolve the criterion . " so now officially , nokia is complaining and said they are looking for support , qualcomm is saying , too " we should n't abandon the fifty percent yet . we should at least try once again , one more round . " so this is where we are . i hope that this is going to be a adopted . next wednesday we are going to have another teleconferencing call , so we 'll see what where it goes . phd e: so what about the issue of the weights on the for the different systems , the - matched , and medium - mismatched and professor b: , that 's what that 's a g very good point , because david says " we ca we can manipulate this number by choosing the right weights anyways . " so while you are right but if you put a zero weight zero on a mismatched condition , or highly mismatched then you are done . but weights were also deter already decided half a year ago . so professor b: , people will not like it . now what is happening now is that i th that people try to match the criterion to solution . they have solution . now they want to make their criterion is and that this is not the right way . it may be that eventually it may ha it may have to happen . but it 's should happen at a point where everybody feels comfortable that we did all what we could . and i do n't think we did . , that this test was a little bit bogus because of the data and essentially there were these arbitrary decisions made , and everything . so , so this is where it is . 
so what we are doing at ogi now is working on our parts which we a little bit neglected , like noise separation . so we are looking in ways is in which with which we can provide better initial estimate of the mel spectrum , which would be a l , f more robust to noise , and so far not much success . we tried things which a long time ago bill byrne suggested , instead of using fourier spectrum , from fourier transform , use the spectrum from lpc model . their argument there was the lpc model fits the peaks of the spectrum , so it may be m naturally more robust in noise . and " , that makes sense , " but so far we ca n't get much out of it . we may try some standard techniques like spectral subtraction and professor b: not much . or even i was thinking about looking back into these ad - hoc techniques like dennis klatt was suggesting the one way to deal with noisy speech is to add noise to everything . professor b: so . , add moderate amount of noise to all data . so that makes th any additive noise less addi less a effective , professor b: because you already had the noise in a and it was working at the time . it was like one of these things , but if you think about it , it 's actually pretty ingenious . so , just take a spectrum and add of the constant , c , to every value . professor b: exactly . and if then if this data becomes noisy , it b it becomes eff effectively becomes less noisy . but you can not add too much noise because then you 'll s then you 're clean recognition goes down , but it 's yet to be seen how much , it 's a very simple technique . yes it 's a very simple technique , you just take your spectrum and use whatever is coming from fft , add constant , on onto power spectrum . that that or the other thing is if you have a spectrum , what you can s start doing , you can leave start leaving out the p the parts which are low in energy and then perhaps one could try to find a all - pole model to such a spectrum . because a all - pole model will still try to put the continuation of the model into these parts where the issue set to zero . that 's what we want to try . i have a visitor from brno . he 's a like young faculty . pretty hard - working so he 's looking into that . and then most of the effort is now also aimed at this e trap recognition . this this is this recognition from temporal patterns . professor b: tha this is familiar like because we gave you the name , but , what it is , is that normally what you do is that you recognize speech based on a shortened spectrum . essentially l p - lpc , mel cepstrum , everything starts with a spectral slice . so if you s so , given the spectrogram you essentially are sliding the spectrogram along the f frequency axis and you keep shifting this thing , and you have a spectrogram . so you can say " you can also take the time trajectory of the energy at a given frequency " , and what you get is then , that you get a p vector . and this vector can be a s assigned to s some phoneme . namely you can say i it i will say that this vector will describe the phoneme which is in the center of the vector . and you can try to classify based on that . and you so you classi so it 's a very different vector , very different properties , we much about it , but the truth is professor b: , so you get many decisions . and then you can start dec thinking about how to combine these decisions . exactly , that 's what it is . because if you run this recognition , you get you still get about twenty percent error twenty percent correct . 
on like for the frame by frame basis , so it 's much better than chance . professor b: that 's another thing . c currently we start always with critical band spectrum . for various reasons . but the latest observation is that you are you can get quite a big advantage of using two critical bands at the same time . professor b: and the reasons there are some reasons for that . because there are some reasons i could talk about , will have to tell you about things like masking experiments which yield critical bands , and also experiments with release of masking , which actually tell you that something is happening across critical bands , across bands . and phd e: how do you convert this energy over time in a particular frequency band into a vector of numbers ? phd e: it 's just the amount of energy in that band from f in that time interval . professor b: yes , yes . and that 's what i ' m saying then , so this is a starting vector . it 's just like shortened f spectrum , . but now we are trying to understand what this vector actually represents , a question is like " how correlated are the elements of this vector ? " turns out they are quite correlated , because , especially the neighboring ones , they they represent the same almost the same configuration of the vocal tract . so there 's a very high correlation . so the classifiers which use the diagonal covariance matrix do n't like it . so we 're thinking about de - correlating them . then the question is " can you describe elements of this vector by gaussian distributions " , or to what extent ? and and so on . so we are learning quite a lot about that . and then another issue is how many vectors we should be using , the so the minimum is one . but is the critical band the right dimension ? so we somehow made arbitrary decision , " yes " . then but then now we are thinking a lot how to use at least the neighboring band because that seems to be happening this i somehow start to believe that 's what 's happening in recognition . cuz a lot of experiments point to the fact that people can split the signal into critical bands , but then so you can you are quite capable of processing a signal in independently in individual critical bands . that 's what masking experiments tell you . but at the same time you most likely pay attention to at least neighboring bands when you are making any decisions , you compare what 's happening in this band to what 's happening to the band to the neighboring bands . and that 's how you make decisions . that 's why the articulatory events , which f fletcher talks about , they are about two critical bands . you need at least two , . you need some relative , relative relation . absolute number does n't tell you the right thing . you need to you need to compare it to something else , what 's happening but it 's what 's happening in the close neighborhood . so if you are making decision what 's happening at one kilohertz , you want to 's happening at nine hundred hertz and it and maybe at eleven hundred hertz , but you do n't much care what 's happening at three kilohertz . phd e: so it 's really w it 's like saying that what 's happening at one kilohertz depends on what 's happening around it . it 's relative to it . professor b: to some extent , it that is also true . but it 's but , th what humans are very much capable of doing is that if th if they are exactly the same thing happening in two neighboring critical bands , recognition can discard it . 
is what 's happening professor b: and so if you d if you a if you add the noise that normally masks the signal and you can show that in that if the if you add the noise outside the critical band , that does n't affect the decisions you 're making about a signal within a critical band . unless this noise is modulated . if the noise is modulated , with the same modulation frequency as the noise in a critical band , the amount of masking is less . the moment you provide the noise in n neighboring critical bands . professor b: so the s m masking curve , normally it looks like i start from here , so you have no noise then you are expanding the critical band , so the amount of maching is increasing . and when you e hit a certain point , which is a critical band , then the amount of masking is the same . so that 's the famous experiment of fletcher , a long time ago . like that 's where people started thinking " this is interesting ! " so . but , if you modulate the noise , the masking goes up and the moment you start hitting the another critical band , the masking goes down . so essentially that 's a very clear indication that cognition can take into consideration what 's happening in the neighboring bands . but if you go too far in a if you if the noise is very broad , you are not increasing much more , so if you are far away from the signal f the frequency at which the signal is , then the m even the when the noise is co - modulated it 's not helping you much . so things like this we are playing with the hope that perhaps we could eventually u use this in a real recognizer . phd e: but you probably wo n't have anything before the next time we have to evaluate , professor b: probably not . , maybe , most likely we will not have anything which c would comply with the rules . like because professor b: latency currently chops the require significant latency amount of processing , because we any better , yet , than to use the neural net classifiers , and traps . though the work which everybody is looking at now aims at s trying to find out what to do with these vectors , so that a g simple gaussian classifier would be happier with it . or to what extent a gaussian classifier should be unhappy that , and how to gaussian - ize the vectors , so this is what 's happening . then sunil is asked me f for one month 's vacation and since he did not take any vacation for two years , i had no i did n't have heart to tell him no . so he 's in india . professor b: , he may be looking for a girl , for i do n't ask . i know that naran - when last time narayanan did that he came back engaged . phd e: , i ' ve known other friends who they go to ind - they go back home to india for a month , they come back married , professor b: and then what happened with narayanan was that he start pushing me that he needs to get a phd because they would n't give him his wife . and she 's very pretty and he loves her and so we had to really professor b: we had i had a incentive because he always had this plan except he never told me . professor b: figured that that was a that he told me the day when we did very at our nist evaluations of speaker recognition , the technology , and he was involved there . we were after presentation we were driving home and he told me . professor b: so i said " , ok " so he took another three quarter of the year but he was out . so i would n't surprise me if he has a plan like that , though pratibha still needs to get out first . cuz pratibha is there a year earlier . 
and s and satya needs to get out very first because he 's he already has four years served , though one year he was getting masters . professor b: there , we about evaluation , next meeting is in june . and but like getting get together . professor b: i assume so . yes , but nobody even set up yet the date for delivering endpointed data . and this that . but i , what would be extremely useful , if we can come to our next meeting and say " we did get fifty percent improvement . if if you are interested we eventually can tell you how " , but we can get fifty percent improvement . because people will s will be saying it 's impossible . professor b: u yes . but i assume that it will be similar , i do n't see the reason why it should n't be . professor b: i d i do n't see reason why it should be worse . cuz if it is worse , then we will raise the objection , we say " how come ? " because if we just use our voice activity detector , which we do n't claim even that it 's wonderful , it 's just like one of them . professor b: we get this improvement , how come that we do n't see it on your endpointed data ? phd c: i it could be even better , because the voice activity detector that i choosed is something that cheating , it 's using the alignment of the speech recognition system , phd c: and only the alignment on the clean channel , and then mapped this alignment to the noisy channel . professor b: and on clean speech data . david told me yesterday or harry actually he told harry from qualcomm and harry brought up the suggestion we should still go for fifty percent he says are you aware that your system does only thirty percent comparing to endpointed baselines ? so they must have run already something . and harry said " . but we think that we did n't say the last word yet , that we have other things which we can try . " so . so there 's a lot of discussion now about this new criterion . because nokia was objecting , with qualcomm 's we supported that , we said " yes " . now everybody else is saying " you guys might must be out of your mind . " the guenter hirsch who d does n't speak for ericsson anymore because he is not with ericsson and ericsson may not may withdraw from the whole aurora activity because they have so many troubles now . ericsson 's laying off twenty percent of people . professor b: guenter is already he got the job already was working on it for past two years or three years he got a job at some fachschule , the technical college not too far from aachen . so it 's like professor u university professor , not quite a university , not quite a it 's not aachen university , but it 's a good school and he 's happy . and he , he was hoping to work with ericsson like on t like consulting basis , but right now he says it does n't look like that anybody is even thinking about speech recognition . they think about survival . so . but this is being now discussed right now , and it 's possible that it may get through , that we will still stick to fifty percent . but that means that nobody will probably get this i m this improvement . yet , wi with the current system . which event es essentially that we should be happy with because that would mean that at least people may be forced to look into alternative solutions phd c: but maybe we are not too far from fifty percent , from the new baseline . professor b: is it like is how did you come up with this number ? if you improve twenty by twenty percent the c the f the all baselines , it 's just a quick c comp co computation ? 
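For the record, that quick computation is just compounding relative error-rate reductions. A minimal sketch with made-up numbers: a system that is 50% better than the old baseline, measured against a new baseline that endpointing has already improved by 20%.

```python
def relative_improvement(baseline_err: float, system_err: float) -> float:
    """Relative error-rate reduction of a system over a baseline."""
    return 1.0 - system_err / baseline_err

old_baseline = 20.0                # made-up word error rate, in percent
system_err = old_baseline * 0.5    # a system 50% better than the old baseline
new_baseline = old_baseline * 0.8  # endpointing improves the baseline itself by 20%

print(relative_improvement(old_baseline, system_err))  # 0.500 vs the old baseline
print(relative_improvement(new_baseline, system_err))  # 0.375 vs the new baseline
```

So the same system is only 37.5% better than the improved baseline, which is in the ballpark of the "only thirty percent" figure quoted above.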
phd e: how 's your documentation or whatever it w what was it you guys were working on last week ? phd d: more or less it 's finished . ma - nec to need a little more time to improve the english , and maybe s to fill in something some small detail , something like that , but it 's more or less ready . phd e: so have you been running some new experiments ? i saw some jobs of yours running on some of the machine phd c: right . we ' ve fff done some strange things like removing c - zero or c - one from the vector of parameters , and we noticed that c - one is almost not useful . you can remove it from the vector , it does n't hurt . phd e: really ? ! that has no effect ? is this in the baseline ? or in professor b: so we were just discussing , since you mentioned that , in it w driving in the car with morgan this morning , we were discussing a good experiment for b for beginning graduate student who wants to run a lot of who wants to get a lot of numbers on something which is , like , " imagine that you will start putting every co any coefficient , which you are using in your vector , in some general power . professor b: general pow power . like you take a s power of two , or take a square root , . so suppose that you are working with a s c - zer c - one . so if you put it in a s square root , that effectively makes your model half as efficient . because your gaussian mixture model , computes the mean . and and i but it 's the mean is an exponent of the whatever , the this gaussian function . professor b: so you 're compressing the range of this coefficient , so it 's becoming less efficient . professor b: right ? morgan was @ and he was saying this might be the alternative way how to play with a fudge factor , in the , just compress the whole vector . and i said " in that case why do n't we just start compressing individual elements , like when because in old days we were doing when people still were doing template matching and euclidean distances , we were doing this liftering of parameters , because we observed that higher parameters were more important than lower for recognition . and the c - ze c - one contributes mainly slope , and it 's highly affected by frequency response of the recording equipment and that thing , so we were coming with all these f various lifters . bell labs had he this r raised cosine lifter which still is built into h htk for reasons n unknown to anybody , but we had exponential lifter , or triangle lifter , basic number of lifters . and . but so they may be a way to fiddle with the f professor b: insertions , deletions , or the giving a relative modifying relative importance of the various parameters . the only problem is that there 's an infinite number of combinations and if the if you s if y professor b: , you need a lot of graduate students , and a lot of computing power . phd e: you need to have a genetic algorithm , that tries random permutations of these things . professor b: if you were at bell labs or i d i should n't be saying this in on a mike , or i ibm , that 's what maybe that 's what somebody would be doing . , the places which have a lot of computing power , so because it is really it 's a p it 's a it 's it will be reasonable search but i wonder if there is n't some way of doing this search like when we are searching say for best discriminants . phd e: actually , i that this would n't be all that bad . 
you compute the features once , and then these exponents are just applied to that professor b: because essentially you are saying " this feature is not important " . or less important , so that 's th that 's a painful one , phd e: so for each set of exponents that you would try , it would require a training and a recognition ? professor b: but but a minute . you may not need to re retrain the m model . you just may n may need to c give less weight to a mod a component of the model which represents this particular feature . you do n't have to retrain it . phd e: so if you instead of altering the feature vectors themselves , you modify the gaussians in the models . professor b: you just multiply . you modify the gaussian in the model , but in the test data you would have to put it in the power , but in a training what you c in a training in trained model , all you would have to do is to multiply a model by appropriate constant . phd e: but why if you 're multi if you 're altering the model , why w in the test data , why would you have to muck with the cepstral coefficients ? professor b: because in test data you ca do n't have a model . you have only data . but in a tr professor b: that is true , but w , so what you want to do you want to say if obs you if you observe something like stephane observes , that c - one is not important , you can do two things . if you have a trained recognizer , in the model , the component which di dimension wh phd e: all of the all of the mean and variances that correspond to c - one , you put them to zero . professor b: to the s it . but what i ' m proposing now , if it is important but not as important , you multiply it by point one in a model . but but professor b: that you multiply the i would have to look in the math , how does the model phd e: you , you 'd have to modify the standard deviation , so that you make it wider or narrower . professor b: effectively , that 's i that 's what you do . that 's what you do , you modify the standard deviation as it was trained . effectively you , y in f in front of the model , you put a constant . s effectively what you 're doing is you are modifying the deviation . phd e: so by making th the standard deviation narrower , your scores get worse for unless it 's exactly right on the mean . professor b: yes , so you making this particular dimension less important . because see what you are fitting is the multidimensional gaussian , it 's a it has thirty - nine dimensions , or thirteen dimensions if you g ignore deltas and double - deltas . so in order if you in order to make dimension which stephane sees less important , not useful , less important , what you do is that this particular component in the model you can multiply by w you can de - weight it in the model . but you ca n't do it in a test data because you do n't have a model for th when the test comes , but what you can do is that you put this particular component in and you compress it . that becomes th gets less variance , subsequently becomes less important . phd e: could n't you just do that to the test data and not do anything with your training data ? professor b: that would be very bad , because your t your model was trained expecting , that would n't work . because your model was trained expecting a certain var variance on c - one . and because the model thinks c - one is important . 
after you train the model , you y you could do still what i was proposing initially , that during the training you compress c - one that becomes then it becomes less important in a training . but if you have if you want to run e ex extensive experiment without retraining the model , you do n't have to retrain the model . you train it on the original vector . but after , you wh when you are doing this parametric study of importance of c - one you will de - weight the c - one component in the model , and you will put in the you will compress the this component in a in the test data . s by the same amount . phd e: could you also if you wanted to try an experiment by leaving out say , c - one , could n't you , in your test data , modify the all of the c - one values to be way outside of the normal range of the gaussian for c - one that was trained in the model ? so that effectively , the c - one never really contributes to the score ? professor b: because that would then your model would be unlikely . your likelihood would be low , because you would be providing severe mismatch . phd e: but what if you set if to the mean of the model , then ? and it was a cons you set all c - ones coming in through your test data , you change whatever value that was there to the mean that your model had . professor b: but , no i do n't think that it would be the same . , no , the if you set it to a mean , that would no , you ca n't do that . y you ca ch - chuck , you ca n't do that . professor b: because that would be a really f fiddling with the data , you ca n't do that . professor b: but what you can do , i ' m confident you ca , i ' m reasonably confident and i putting it on the record , y people will listen to it for centuries now , is what you can do , is you train the model with the original data . then you decide that you want to see how important c - one is . so what you will do is that a component in the model for c - one , you will divide it by two . and you will compress your test data by square root . then you will still have a perfect m match . except that this component of c - one will be half as important in a overall score . then you divide it by four and you take a square , f fourth root . then if you think that some component is more important then th it then i it is , based on training , then you multiply this particular component in the model by professor b: , multiply this component i it by number b larger than one , and you put your data in power higher than one . then it becomes more important . in the overall score , i believe . phd c: no . but it 's the the variance is on the denominator in the gaussian equation . so . it 's maybe it 's the contrary . if you want to decrease the importance of a c parameter , you have to increase it 's variance . phd e: but if your if your original data for c - one had a mean of two . and now you 're changing that by squaring it . now your mean of your c - one original data has is four . but your model still has a mean of two . so even though you ' ve expended the range , your mean does n't match anymore . phd c: what i see what could be done is you do n't change your features , which are computed once for all , but you just tune the model . so . you have your features . you train your model on these features . 
and then if you want to decrease the importance of c - one you just take the variance of the c - one component in the model and increase it if you want to decrease the importance of c - one or decrease it professor b: you would have to modify the mean in the model . i you i agree with you . but , but it 's i it 's do - able , phd e: could increase the variance to decrease the importance . , because if you had a huge variance , you 're dividing by a large number , you get a very small contribution . phd e: , actually , this reminds me of something that happened when i was at bbn . we were playing with putting pitch into the mandarin recognizer . and this particular pitch algorithm when it did n't think there was any voicing , was spitting out zeros . so we were getting when we did clustering , we were getting groups of features phd e: so , when ener when anytime any one of those vectors came in that had a zero in it , we got a great score . it was just , { nonvocalsound } , incredibly { nonvocalsound } high score , and so that was throwing everything off . so if you have very small variance you get really good scores when you get something that matches . so that 's a way , that 's a way to increase the , n that 's interesting . so , that would be that does n't require any retraining . phd e: you you have a step where you modify the models , make a d copy of your models with whatever variance modifications you make , and rerun recognition . and then do a whole bunch of those . that could be set up fairly easily , and you have a whole bunch of grad a: did n't you say you got these htk 's set up on the new linux boxes ? phd e: , and they 're just t right now they 're installing increasing the memory on that the linux box . professor b: because then y you just write the " do " - loops and then you pretend that you are working while you are you c you can go fishing . professor b: then you are in this mode like all of those arpa people are , since it is on the record , i ca n't say which company it was , but it was reported to me that somebody visited a company and during a d during a discussion , there was this guy who was always hitting the carriage returns on a computer . so after two hours the visitor said " wh why are you hitting this carriage return ? " and he said " , we are being paid by a computer ty we are we have a government contract . and they pay us by amount of computer time we use . " it was in old days when there were of pdp - eights and that thing . professor b: because so they had a they literally had to c monitor at the time on a computer how much time is being spent i i or on this particular project . phd e: have you ever seen those little it 's it 's this thing that 's the shape of a bird and it has a red ball and its beak dips into the water ? phd e: so if you could hook that up so it hit the keyboard that 's an interesting experiment . professor b: it would be similar to i knew some people who were that was in old communist czechoslovakia , so we were watching for american airplanes , coming to spy on us at the time , so there were three guys stationed in the middle of the woods on one l lonely watching tower , spending a year and a half there because there was this service and so they very quickly they made friends with local girls and local people in the village professor b: and so but they there was one plane flying over s always above , and so that was the only work which they had . 
they like four in the afternoon they had to report there was a plane from prague to brno f flying there , so they f very q f first thing was that they would always run back and at four o ' clock and quickly make a call , " this plane is passing " then a second thing was that they took the line from this u post to a local pub . and they were calling from the pub . and they but third thing which they made , and when they screwed up , they finally they had to p the pub owner to make these phone calls because they did n't even bother to be there anymore . and one day there was no plane . at least they were smart enough that they looked if the plane is flying there , and the pub owner says " my four o ' clock , ok , quickly p pick up the phone , call that there 's a plane flying . " there was no plane for some reason , phd e: that would n't be too difficult to try . maybe i could set that up . and we 'll just professor b: , at least go test the s test the assumption about c - one to begin with . but then one can then think about some predictable result to change all of them . it 's just like we used to do these the distance measures . it might be that phd e: , so the first set of variance weighting vectors would be just one modifying one and leaving the others the same . professor b: because you see , what is happening here in a in such a model is that it 's tells you what has a low variance is more reliable , phd e: but there could just naturally be low variance . because i like , i ' ve noticed in the higher cepstral coefficients , the numbers seem to get smaller , so d professor b: that 's why people used these lifters were inverse variance weighting lifters that makes euclidean distance more like mahalanobis distance with a diagonal covariance when you knew what all the variances were over the old data . what they would do is that they would weight each coefficient by inverse of the variance . turns out that the variance decreases at least at fast , i believe , as the index of the cepstral coefficients . you can show that analytically . so typically what happens is that you need to weight the higher coefficients more than the lower coefficients . when we talked about aurora still i wanted to m make a plea encourage for more communication between different parts of the distributed center . even when there is nothing to s to say but the weather is good in ore - in berkeley . i ' m that it 's being appreciated in oregon and maybe it will generate similar responses down here , like , phd e: is the if we mail to " aurora - inhouse " , does that go up to you guys also ? professor b: and then we also can send the dis to the same address right , and it goes to everybody professor b: because what 's happening naturally in research , i know , is that people essentially start working on something and they do n't want to be much bothered , but what the then the danger is in a group like this , is that two people are working on the same thing and i c both of them come with the s very good solution , but it could have been done somehow in half of the effort . , there 's another thing which i wanted to report . lucash , wrote the software for this aurora - two system . reasonably good one , because he 's doing it for intel , but i trust that we have rights to use it or distribute it and everything . cuz intel 's intentions originally was to distribute it free of charge anyways . u s and so we will make that at least you can see the software and if it is of any use . 
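Stepping back to the c-1 de-weighting exchange above for one concrete illustration: with a diagonal-covariance Gaussian, inflating one dimension's variance shrinks that dimension's pull on the total log score, with no retraining. A minimal numeric sketch; the three-dimensional "model" and the test frame are made up.

```python
import math

def diag_gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal-covariance Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

# made-up model; dimension 1 plays the role of c-1
mean = [0.0, 2.0, -1.0]
var = [1.0, 1.0, 1.0]
x = [0.1, 5.0, -0.9]  # test frame with a big mismatch on the c-1 dimension

print(diag_gaussian_loglik(x, mean, var))             # mismatch costs a lot
var_deweighted = [var[0], var[1] * 10.0, var[2]]      # inflate the c-1 variance
print(diag_gaussian_loglik(x, mean, var_deweighted))  # mismatch costs much less
```

Inflating the variance is the simple end of what is proposed above; the power/compression scheme additionally compresses the test features and rescales the trained model by the same amount so that the match stays exact while the effective weight changes, and the old inverse-variance lifters did the same kind of per-coefficient re-weighting at the distance computation.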
just it might be a reasonable point for p perhaps start converging . because morgan 's point is that he is an experienced guy . he says " it 's very difficult to collaborate if you are working with supposedly the same thing , in quotes , except which is not s is not the same . which which one is using that set of hurdles , another one set is using another set of hurdles . so . and then it 's difficult to c compare . phd c: what about harry ? . we received a mail last week and you are starting to do some experiments . professor b: because intel paid us should i say on a microphone ? some amount of money , not much . not much say on a microphone . much less then we should have gotten for this amount of work . and they wanted to have software so that they can also play with it , which means that it has to be in a certain environment they use actu actually some intel libraries , but in the process , lucash just rewrote the whole thing because he figured rather than trying to f make sense of including icsi software not for training on the nets but he rewrote the or so maybe somehow reused over the parts of the thing so that the whole thing , including mlp , trained mlp is one piece of software . is it useful ? grad a: ye - . , i remember when we were trying to put together all the icsi software for the submission . professor b: or that 's what he was saying , he said that it was like just so many libraries and nobody knew what was used when , and so that 's where he started and that 's where he realized that it needs to be at least cleaned up , and so it this is available . phd c: , the only thing i would check is if he does he use intel math libraries , phd c: because if it 's the case , it 's maybe not so easy to use it on another architecture . professor b: n not maybe maybe not in a first maybe not in a first ap approximation because he started first just with a plain c or c - plus before check on that . and in otherwise the intel libraries , they are available free of f freely . but they may be running only on windows . or on the professor b: that is p that is possible . that 's why intel is distributing it , or that 's phd c: there are at least there are optimized version for their architecture . i never checked carefully these sorts of professor b: i know there was some issues that initially we d do all the development on linux but we use we do n't have we have only three s suns and we have them only because they have a spert board in . otherwise otherwise we t almost exclusively are working with pc 's now , with intel . in that way intel succeeded with us , because they gave us too many good machines for very little money or nothing . so we run everything on intel . phd e: hynek , i if you ' ve ever done this . the way that it works is each person goes around in turn , and you say the transcript number and then you read the digits , the strings of numbers as individual digits . so you do n't say " eight hundred and fifty " , you say " eight five " , and .

summary: the meeting recorder group of icsi at berkeley met without their most senior member , but attending instead was a visitor from research partner ogi. he reported on a recent project meeting from his group's perspective. there was much politics involved , and disagreement between groups. he also brought the icsi members up to date with his group's latest work. the icsi group reported their most recent progress and detailed their recent findings.
having discussed this with the icsi project leader , the ogi member told of some future investigation they had devised , which would look at adjusting the importance of some features. this led to a great deal of discussion. there were also further calls for greater communication between the groups. icsi currently has no mailing list which includes ogi personnel , so they will set one up. there was disagreement at the project meeting over the two groups' development of voice activity detectors , particularly since one group makes their code available to all and the others do not. there were also issues relating to the amount of improvement required if the baseline is improved. ogi are looking at methods of making the initial estimates more robust to noise , though with little success. most of their effort is now on trap recognition from temporal patterns. icsi members have almost finished their report , and have been trying some things out , which has led to the conclusion that the c-1 cepstral coefficient is not at all useful.
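Since the summary mentions TRAP recognition, one concrete line on it: the classifier input is the time trajectory of log energy in a single critical band, centered on the frame being labeled. A minimal sketch; the roughly one-second window (101 frames at a 10 ms step) and the mean normalization are standard TRAP choices assumed here, not details fixed by the discussion above.

```python
import numpy as np

def trap_vector(logspec: np.ndarray, band: int, frame: int,
                half_window: int = 50) -> np.ndarray:
    """Temporal pattern (TRAP) input vector for one critical band.

    logspec: (n_frames, n_bands) log critical-band energies.
    Returns the energy trajectory of `band` centered on `frame`;
    a per-band classifier would label the phoneme at the center frame.
    """
    traj = logspec[frame - half_window : frame + half_window + 1, band].copy()
    return traj - traj.mean()  # keep the shape, drop the absolute level

# made-up example: 15 critical bands, 300 frames
rng = np.random.default_rng(0)
logspec = rng.standard_normal((300, 15))
print(trap_vector(logspec, band=5, frame=150).shape)  # (101,)
```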
grad e: it depends on if the temp files are there or not , that at least that 's my current working hypothesis , that what happens is it tries to clear the temp files and if they 're too big , it crashes . phd b: when the power went out the other day and i restarted it , it crashed the first time . grad e: it 's i they 're called temp files , but they 're not actually in the temp directory they 're in the scratch , they 're not backed up , but they 're not erased either on power failure . phd d: but that 's usually the meeting that i recorded , and it neve it does n't crash on me . phd b: this was n't actually , this was n't a before your meeting , this was , tuesday afternoon when , , robert just wanted to do a little recording , professor c: i when would be a good excuse for it , but ca n't to be giving a talk t and use the example from last week with everybody t doing the digits at once . phd b: talk about a good noise shield . ? you wanted to pe keep people from listening in , you could like have that playing outside the room . nobody could listen in . grad e: i since i ' ve been gone all week , i did n't send out a reminder for an agenda , grad e: do we have anything to talk about or should we just read digits and go ? phd d: so , are we do we have like an agenda or anything that we should be phd f: so so the deal is that , , be available after , like ten thirty . i how s how early you wanted to grad e: eurospeech is due on friday and then i ' m going down to san , san jose friday night , so , if we start and late saturday that 's a good thing . professor c: , and they 'll end up here . so b and also brian kingsbury is actually flying from , the east coast on that morning . professor c: so , i i will be , he 's taking a very early flight and we do have the time work difference running the right way , but i still think that there 's no way we could start before eleven . it might end up really being twelve . so when we get closer we 'll find people 's plane schedules , and let everybody know . , so . that 's good . grad e: but , maybe an agenda , or at least some things to talk about would be a good idea . professor c: we can start gathering those ideas , but then we should firm it up by next thursday 's meeting . postdoc a: will we have time to , to prepare something that we in the format we were planning for the ibm transcribers by then , or ? phd b: yes , he 's i ' m , i should have forwarded that along . , i mentioned at the last meeting , he said that , he talked to them and it was fine with the beeps they would be that 's easy for them to do . grad e: great . ok . so , , though thi - thilo is n't here , but , i have the program to insert the beeps . what i do n't have is something to parse the output of the channelized transcripts to find out where to put the beeps , but that should be really easy to do . so do we have a meeting that 's been done with , grad e: that we ' ve tightened it up to the point where we can actually give it to ibm and have them try it out ? postdoc a: he generated , a channel - wise presegmented version of a meeting , but it was robustness rather than edu so i depends on whether we 're willing to use robustness ? grad e: we had talked about doing maybe edu as a good choice , though . , whatever we have . phd b: we ' ve talked about that as being the next ones we wanted to transcribe . grad e: hand - checked ? cuz that was one of the processes we were talking about as . postdoc a: that 's right . , we have n't done that . 
i could set someone on that tomorrow . phd b: and we probably do n't have to do necessarily a whole meeting for that if we just wanna send them a sample to try . phd b: i , maybe we can figure out how long it 'll take @ to do . grad e: i , it seems to me w we probably should go ahead and do a whole meeting because we 'll have to transcribe the whole meeting anyway sometime . professor c: yes except that if they had if there was a choice between having fifteen minutes that was fully the way you wanted it , and having a whole meeting that did n't get at what you wanted for them it 's just dependent of how much phd b: i , the only thing i ' m not about is , how quickly can the transcribers scan over and fix the boundaries , grad e: it 's gon na be one or two times real time at , excuse me , two or more times real time , right ? cuz they have to at least listen to it . professor c: can we pipeline it so that say there 's , the transcriber gets done with a quarter of the meeting and then we you run it through this other ? grad e: the other is i b i ' m just thinking that from a data keeping - track - of - the - data point of view , it may be best to send them whole meetings at a time and not try to send them bits and pieces . professor c: ok , so . , that 's right . so the first thing is the automatic thing , and then it 's the transcribers tightening up , and then it 's ibm . professor c: ok , so you might as ha run the automatic thing over the entire meeting , and then , you would give ibm whatever was fixed . professor c: , but start from the beginning and go to the end , right ? so if they were only half way through then that 's what you 'd give ibm . right ? phd b: as of what point ? the i the question on my mind is do we for the transcribers to adjust the marks for the whole meeting before we give anything to ibm , or do we go ahead and send them a sample ? let their professor c: why would n't we s @ w i if they were going sequentially through it , why would n't we give them i are we trying to get something done by the time brian comes ? professor c: so if we were , then it seems like giving them something , whatever they had gotten up to , would be better than nothing . grad e: , i do n't think , h they typically work for what , four hours , something like that ? grad e: the they should be able to get through a whole meeting in one sitting . i would think , unless it 's a lot harder than we think it is , which it could be , certainly . phd b: so it 's gon na be , depending on the number of people in the meeting , postdoc a: i there is this issue of , if the segmenter thought there was no speech on a particular stretch , on a particular channel , and there really was , then , if it did n't show up in a mixed signal to verify , then it might be overlooked , so , the question is " should a transcriber listen to the entire thing or can it g can it be based on the mixed signal ? " and i th so far as i ' m concerned it 's fine to base it on the mixed signal at this point , and grad e: that 's what it seems to me too , in that if they need to , just like in the other cases , they can listen to the individual , if they need to . grad e: so , they have the normal channeltrans interface where they have each individual speaker has their own line , but you 're listening to the mixed signal and you 're tightening the boundaries , correcting the boundaries . you should n't have to tighten them too much because thilo 's program does that . phd d: it will miss them . 
it will miss most of the really short things . like that . phd d: , you have to say " - " more slowly to get c no , i ' m s i ' m actually serious . postdoc a: , but presumably , most of those they should be able to hear from the mixed signal unless they 're embedded in the heavil heavy overlap section when in which case they 'd be listening to the channels anyway . phd b: ca n't we could n't we just have , i , maybe this just does n't fit with the software , but i if i did n't know anything about transcriber and i was gon na make something to let them adjust boundaries , i would just show them one channel at a time , with the marks , and let them adju grad e: , but then they have to do but then they for this meeting they would have to do seven times real time , and it would probably be more than that . grad e: right ? because they 'd have to at least listen to each channel all the way through . phd b: but i but it 's very quick , you scan , if you have a display of the waveform . postdoc a: cuz you also see the breaths on the waveform . i ' ve looked at the int , s i ' ve tried to do that with a single channel , and you do see all sorts of other besides just the voice . grad e: , and that they 're going much more on acoustics than they are on visuals . postdoc a: what you the digital what the digital task that you had your interface ? , i know for a fact that one of those sh she could really she could judge what th what the number was based on the waveform . grad e: , that 's actually true . , you 're right . you 're right . , i found the same thing that when i was scanning through the wave form i could see when someone started to read digits just by the shapes . professor c: so , they 're looking at a mixed signal , or they 're looking what are they looking at visually ? postdoc a: , they have a choice . they could choose any signal to look at . i ' ve tried lookin but usually they look at the mixed . but i ' ve tried looking at the single signal and in order to judge when it was speech and when it was n't , but the problem is then you have breaths which show up on the signal . professor c: but the procedure that you 're imagining , people vary from this , is that they have the mixed signal wave form in front of them , professor c: and they have multiple , , let 's see , there is n't we do n't have transcription yet . so but there 's markers of some sort that have been happening automatically , and those show up on the mixed signal ? there 's a @ clicks ? postdoc a: they show up on the separate ribbons . so you have a separate ribbon for each channel , postdoc a: and i it 'll be because it 's being segmented as channel at a time with his with thilo 's new procedure , then you do n't have the correspondence of the times across the bins across the ribbons you could have professor c: ok , so the way you 're imaging is they play it , and they see this happened , then and if it 's about right , they just let it slide , and if it there 's a question on something , they stop and maybe look at the individual wave form . grad e: , they would n't look at it at this point . they would just listen . grad e: , the problem is that the interface does n't really allow you to switch visuals . grad e: the problem is that the tcl - tk interface with the visuals , it 's very slow to load waveforms . grad e: and so when i tried that was the first thing i tried when i first started it , postdoc a: , . visually . 
you can you can switch quickly between the audio , but you just ca n't get the visual display to show quickly . so you have to it takes , i , three , four minutes to , it takes long enough postdoc a: it takes long enough cuz it has to reload the i exactly what it 's doing frankly cuz but it t it takes long enough that it 's just not a practical alternative . grad e: it does some shape pre - computation so that it can then scroll it quickly , postdoc a: now you could set up multiple windows , each one with a different signal showing , and then look between the windows . grad e: , so we could use like x waves instead of transcriber , and it loads faster , certainly . grad e: so i actually before , dave gelbart did this , i did an interface which showed each waveform and ea a ribbon for each waveform , grad e: but the problem with it is even with just three waveforms it was just painfully slow to scroll . so you just scroll a screen and it would , go " kur - chunk ! " postdoc a: , i am thinking if we have a meeting with only four speakers and , you could fire up a transcriber interface for , y , in different windows , multiple ones , one for each channel . and it 's a hack but it would be one way of seeing the visual form . grad e: that if we decide that we need that they need to see the visuals , we need to change the interface so that they can do that . phd d: that 's actually what of , loading the chopped up waveforms , , that would make it faster phd b: the problem is if anything 's cut off , you ca n't expand it from the chopped up phd d: no , the individual channels that were chopped up that it 'd be to be able to go back and forth between those short segments . cuz you do n't really nee like nine tenths of the time you 're throwing most of them out , but what you need are tho that particular channel , or that particular location , and , might be , cuz we save those out already , to be able to do that . but it wo n't work for ibm , it only works here cuz they 're not saving out the individual channels . postdoc a: , i do think that this will be a doable procedure , and have them starting with mixed postdoc a: and , then when they get into overlaps , just have them systematically check all the channels to be that there is n't something hidden from audio view . grad e: , hopefully , the mixed signal , the overlaps are pretty audible because it is volume equalized . so they should be able to hear . the only problem is , counting how many and if they 're really correct or not . so , i . grad e: right but once that they happen , you can at least listen to the close talking , phd d: but you would know that they were there , and then you would switch . right . and then you would switch into the other professor c: but right now , to do this limitation , the switching is going to be switching of the audio ? is what she 's saying . grad e: did dave did dave do that change where you can actually just click rather than having to go up to the menu to listen to the individual channels ? postdoc a: i ' m not what click on the ribbon ? , you can get that , get you can get the , you can get it to switch audio ? , not last i tried , but , maybe he 's changed it again . grad e: we should get him to do that because , that would be much , much faster than going to the menu . postdoc a: i disagree . there 's a reason i disagree , and that is that , you it 's very good to have a dissociation between the visual and the audio . 
there 're times when i wanna hear the mixed signal , bu but i want to transcribe on the single channel . so right now grad e: just something so that it 's not in the menu option so that you can do it much faster . postdoc a: , that 's the i that might be a personal style thing . i find it really convenient the way it 's set up right now . grad e: it just seems to me that if you wanna quickly " was that jane , no , was that chuck , no , was that morgan " , right now , you have to go up to the menu , and each time , go up to the menu , select it , listen to that channel then click below , and then go back to the menu , select the next one , and then click below . professor c: so , done with that ? does any i forget , does anybody , working on any eurospeech submission related to this ? grad e: i would like to try to do something on digits but if we have time . , it 's due next friday so we have to do the experiments and write the paper . so , i ' m gon na try , but , we 'll just have to see . so actually i wanna get together with both andreas and , stephane with their respective systems . professor c: there was that we that 's right , we had that one conversation about , what did it mean for , one of those speakers to be pathological , was it a grad e: whereas it 's probably something pathologic and actually stephane 's results , confirm that . he s he did the aurora system also got very lousy average error , like fifteen or , fifteen to twenty percent average ? but then he ran it just on the lapel , and got about five or six percent word error ? so that means to me that somewhere in the other recordings there are some pathological cases . but , we th that may not be true . it may be just some of the segments they 're just doing a lousy job on . so i 'll listen to it and find out since you 'd actually split it up by segment . grad e: , he had sent that around to everyone , did you just sent that to me ? phd f: no , i d i did n't . since i considered those preliminary , i did n't . phd f: there were t there was one h one bump at ze around zero , which were the native speakers , phd f: whe those were the non - natives . and then there was another distinct bump at , like , a hundred , which must have been some problem . phd f: and there was this one meeting , i forget which one it was , where like , six out of the eight channels were all , like had a hundred percent error . grad e: which probably means like there was a th the recording interface crashed , or there was a short , someone was jiggling with a cord grad e: or , i extracted it incorrectly , it was labeled it was transcribed incorrectly , something really bad happened , and have n't listened to it yet to find out what it was . phd f: so , if i excluded the pathological ones , by definition , those that had like over ninety - five percent error rate , and the non - natives , then the average error rate was like one point four , phd f: which seemed reasonable given that , the models were n't tuned for it . and the grammar was n't tuned either . phd d: but there 's no overlap during the digit readings , so it should n't really matter . professor c: and so , cuz because what he was what i was saying when i looked at those things is it i was almost gon na call it quadrimodal because there was a whole lot of cases where it was zero percent . they just plain got it all right . and then there and then there was another bunch that were couple percent . 
phd f: but if you p if you actually histogrammed it , and it was a , it was zero was the most of them , phd f: but then there were the others were decaying from there . and then there was the bump for the non - natives and then the pathological ones , postdoc a: you did you have , something in the report about , for f , forced alignment ? have you have you started on that ? phd f: , , so i ' ve been struggling with the forced alignments . so the scheme that i drew on the board last time where we tried to , allow reject models for the s speech from other speakers , most of the time it does n't work very . so , and the i have n't done , the only way to check this right now was for me to actually load these into x waves and , plus the alignments , and s play them and see where the and it looks and so i looked of the utterances from you , chuck , in that one conversation , i which you probably know which one , it 's where you were on the lapel and morgan was sitting next to you and we can hear everything morgan says . but and some of what you , you also appear quite a bit in that cross - talk . so , i actually went through all of those , there were fifty - five segments , in x waves , and did a crude check , and more often than not , it gets it wrong . so there 's either the beginning , mostly the beginning word , where th you , , chuck talks somewhere into the segment , but the first , word of what he says , often " i " but it 's very reduced " i , " that 's just aligned to the beginning of someone else 's speech , in that segment , which is cross - talk . so , i ' m still tinkering with it , but it might be that we ca n't get clean alignments out of this out of those , channels , phd d: but it 's clear from dan that this is not something you can do in a short amount of time . phd d: so so we , we had spent a lot of time , writing up the hlt paper and we wanted to use that , analysis , but the hlt paper has , it 's a very crude measure of overlap . it 's not really something you could scientifically say is overlap , it 's just whether or not the , the segments that were all synchronized , whether there was some overlap somewhere . phd d: and , that pointed out some differences , so he thought if we can do something quick and dirty because dan said the cross - cancellation , it 's not straight - forward . if it were straight - forward then we would try it , but so , it 's good to hear that it was not straight - forward , thinking if we can get decent forced alignments , then at least we can do a overall report of what happens with actual overlap in time , but , phd b: he 's just saying you have to look over a longer time window when you do it . phd b: so you just have to look over longer time when you 're trying to align the things , you ca n't just look grad e: . are you talking about the fact that the recording software does n't do time - synchronous ? is that what you 're referring to ? that seems to me you can do that over the entire file and get a very accurate phd f: the issue was that you have to you have you first have to have a pretty good speech detection on the individual channels . phd d: and it 's dynamic , so i it was more dynamic than some simple models would be able t to so there are some things available , and i too much about this area where if people are n't moving around much than you could apply them , and it should work pretty if you took care of this recording time difference . 
phd d: which a at least is defined , but then if you add the dynamic aspect of adapting distances , then it was n't i it just was n't something that he could do quickly and not in time for us to be able to do something by two weeks from now , so . less than a week . so , so i what we can do if anything , that 's worth , a eurospeech paper at this point . phd f: , we would really need , ideally , a transcriber to time mark the , the be at least the beginning and s ends of contiguous speech . , and , then with the time marks , you can do an automatic comparison of your forced alignments . phd b: because really the at least in terms of how we were gon na use this in our system was to get an ideal an idea , for each channel about the start and end boundaries . phd f: no , that 's how i ' ve been looking at it . , i do n't care that the individual words are aligned correctly , but you do n't wanna , infer from the alignment that someone spoke who did n't . phd b: , maybe if it does n't work for lapel , we can just not use that phd f: i have n't i ha just have n't had the time to , do the same procedure on one of the so i would need a k i would need a channel that has a speaker whose who has a lot of overlap but s , is a non - lapel mike . and , where preferably , also there 's someone sitting next to them who talks a lot . phd f: maybe someone can help me find a good candidate and then i would be willing to phd b: we c what ? maybe the best way to find that would be to look through these . phd b: cuz you can see the seat numbers , and then you can see what type of mike they were using . and so we just look for , somebody sitting next to adam at one of the meetings phd d: it might not be a single person who 's always overlapping that person but any number of people , and , if you align the two hypothesis files across the channels , just word alignment , you 'd be able to find that . so so i that 's a last ther there 're a few things we could do . one is just do like non - lapels if we can get good enough alignments . another one was to try to get somehow align thilo 's energy segmentations with what we have . but then you have the problem of not knowing where the words are because these meetings were done before that segmentation . but maybe there 's something that could be done . phd b: what what is why do you need the , the forced alignment for the hlt for the eurospeech paper ? phd d: , i wanted to just do something not on recognition experiments because that 's ju way too early , but to be able to report , actual numbers . like if we had hand - transcribed pe good alignments or hand - checked alignments , then we could do this paper . it 's not that we need it to be automatic . but without knowing where the real words are , in time phd d: to to an overlap really if it 's really an overlap , or if it 's just a segment correlated with an overlap , phd d: and i that 's the difference to me between like a real paper and a , promissory paper . so , if we d it might be possible to take thilo 's output and like if you have , like right now these meetings are all , phd d: , they 're time - aligned , so if these are two different channels and somebody 's talking here and somebody else is talking here , just that word , if thilo can tell us that there 're boundaries here , we should be able to figure that out because the only thing transcribed in this channel is this word . 
but , , if there are things phd d: , if you have two and they 're at the edges , it 's like here and here , and there 's speech here , then it does n't really help you , so , phd d: it w it would , but , we exactly where the words are because the transcriber gave us two words in this time bin postdoc a: it 's a merging problem . if you had a if you had a s if you had a script which would postdoc a: i ' ve thought about this , and i ' ve discussed it with thilo , postdoc a: , the , i in principle i could imagine writing a script which would approximate it to some degree , but there is this problem of slippage , grad e: s cuz it seemed like most of the cases are the single word sorts , or at least a single phrase postdoc a: i would n't make that generalization cuz sometimes people will say , " and then i " and there 's a long pause and finish the sentence and sometimes it looks coherent and the it 's not a simple problem . but it 's really and then it 's coupled with the problem that sometimes , with a fricative you might get the beginning of the word cut off and so it 's coupled with the problem that thilo 's is n't perfect either . , we ' ve i th it 's like you have a merging problem plus so merging plus this problem of , not y i if the speech - nonspeech were perfect to begin with , the detector , that would already be an improvement , but that 's impossible , i that 's too much to ask . and so i and may , it 's that there always th there would have to be some hand - tweaking , but it 's possible that a script could be written to merge those two types of things . i ' ve discussed it with thilo and in terms of not him doing it , but we discussed some of the parameters of that and how hard it would be to in principle to write something that would do that . phd d: , i in the future it wo n't be as much as an issue if transcribers are using the tightened boundaries to start with , then we have a good idea of where the forced alignment is constrained to . postdoc a: , it 's just , a matter of we had the revolution of improved , interface , one month too late , postdoc a: so it 's just a matter of , from now on we 'll be able to have things channelized to begin with . grad e: and we 'll just have to see how hard that is . so so whether the corrections take too much time . i was just thinking about the fact that if thilo 's missed these short segments , that might be quite time - consuming for them to insert them . phd d: but he also can adjust this minimum time duration constraint and then what you get is noises mostly , grad e: it might be easier to delete something that 's wrong than to insert something that 's missing . professor c: if you can feel confident that what the , that there 's actually something that you 're not gon na miss something , grad e: cuz then you just delete it , and you do n't have to pick a time . postdoc a: the problem is i it 's a really good question , and i really find it a pain in the neck to delete things because you have to get the mouse up there on the t on the text line and i and otherwise you just use an arrow to get down , i it depends on how lar th there 's so many extra things that would make it one of them harder than the other , or vice versa . it 's not a simple question . but , , in principle , like , if one of them is easier then to bias it towards whichever one 's easier . grad e: , i the semantics are n't clear when you delete a segment , because you would say you would have to determine what the surroundings were . 
phd d: you could just say it 's a noise , though , and write , a post - processor will just all you have to do is just phd d: or just say it 's just put " x , " , like " not speech " , grad e: but it 's the semantics that are questionable to me , that you delete something so let 's say someone is talking to here , and then you have a little segment here . , is that part of the speech ? is it part of the nonspeech ? , w what do you embed it in ? phd d: there 's something , though , about keeping , and this is probably another discussion , keeping the that thilo 's detector detected as possible speech and just marking it as not speech than deleting it . because then when you align it , then the alignment can you can put a reject model or whatever , grad e: , i see . so then they could just like put that 's what you meant by just put an " x " there . grad e: so so all they so that all they would have to do is put like an " x " there . grad e: so blank for silence , " s for speech , " x for something else . phd d: whatever , that 's actually a better way to do it cuz the a the forced alignment will probably be more consistent than postdoc a: , like , there 's a complication which is that you can have speech and noise in s postdoc a: , on the same channel , the same speaker , so now sometimes you get a ni microphone pop and , , there 're these fuzzy hybrid cases , and then the problem with the boundaries that have to be shifted around . it 's not a simple problem . phd d: anyway , quick question , though , at a high level do people think , let 's just say that we 're moving to this new era of like using the , pre - segmented t , non - synchronous conversations , does it make sense to try to take what we have now , which are the ones that , we have recognition on which are synchronous and not time - tightened , and try to get something out of those for purposes of illustrating the structure and the nature of the meetings , or is it better to just , forget that and tr , it 's grad e: , we 'll have to , eventually . and my hope was that we would be able to use the forced alignment to get it . phd d: is it worth if we ca n't then we can fake it even if we 're we report , we 're wrong twenty percent of the time or ten percent of the time . grad e: , i ' m thinking are you talking about for a paper , or are talking about for the corpus . phd d: actually that 's a good question because we 'd have to completely redo those meetings , and we have like ten of them now . grad e: we would n't have to re - do them , we would just have to edit them . postdoc a: that when brian comes , this 'll be an interesting aspect to ask him as b grad e: , brian . you s you said ryan . and it 's like , " who 's ryan ? " phd d: , no , that 's a good point , though , because for feature extraction like for prosody , the meetings we have now , it 's a good chunk of data we need to get a decent f postdoc a: and that 's what , ever since the february meeting that i transcribed from last year , forced alignment has been on the table as a way of cleaning them up later . postdoc a: and and so i ' m hopeful that 's possible . i know that there 's complication in the overlap sections and with the lapel mikes , phd d: , we might be able , at the very worst , we can get transcribers to correct the cases where , you have a good estimate where these places are because the recognition 's so poor . right ? phd d: so we need some way to push these first chunk of meetings into a state where we get good alignments . 
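A minimal sketch of the merging idea discussed above, which only resolves the easy case: a coarse transcript bin is pinned to the detector's boundaries when exactly one detected speech segment on that channel overlaps it; anything ambiguous is left for a human pass. The bin and segment tuples are hypothetical stand-ins for the transcriber output and the speech/nonspeech detector output.

```python
def overlaps(a, b):
    """True if two (start, end) time intervals overlap."""
    return a[0] < b[1] and b[0] < a[1]

def merge(bins, segments):
    """bins: [(start, end, text)] loosely timed transcript chunks;
    segments: [(start, end)] detected speech on the same channel.
    A bin is resolved only when exactly one segment overlaps it."""
    resolved, unresolved = [], []
    for start, end, text in bins:
        hits = [s for s in segments if overlaps((start, end), s)]
        if len(hits) == 1:
            resolved.append((hits[0][0], hits[0][1], text))
        else:
            unresolved.append((start, end, text))  # hand-check these
    return resolved, unresolved

bins = [(0.0, 5.0, "so we should start"), (5.0, 9.0, "right - ok then")]
segments = [(1.2, 3.4), (5.5, 6.0), (7.1, 8.2)]
print(merge(bins, segments))
```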
phd f: i ' m probably going to spend another day or so trying to improve things by , by using , acoustic adaptation . , the right now i ' m using the unadapted models for the forced alignments , and it 's possible that you get considerably better results if you , manage to adapt the , phone models to the speaker and the reject model to the to all the other speech . , so phd b: could you could you at the same time adapt the reject model to the speech from all the other channels ? phd b: , not just the speech from that of the other people from that channel , but the speech from the a actual other channels . phd d: but what you do wanna do is take the , even if it 's klugey , take the segments the synchronous segments , the ones from the hlt paper , where only that speaker was talking . phd d: use those for adaptation , cuz if you use everything , then you get all the cross - talk in the adaptation , and it 's just blurred . phd d: and that we know , we have that . and it 's about roughly two - thirds , very roughly averaged . that 's not completely negligible . like a third of it is bad for adaptation or so . professor c: i we 're not turning in to eurospeech , a redo of the hlt paper . that i do n't wanna do that , phd d: morgan 's talk went very it woke , it was really a presented and got people laughing grad e: especially the batteried meter popping up , that was hilarious . right when you were talking about that . grad e: he he was onto the bullet points about talking about the little hand - held , and trying to get lower power and so on , grad e: and microsoft pops up a little window saying " your batteries are now fully charged . " grad e: i ' m thinking about scripting that for my talk , put a little script in there to say " your batteries are low " right when i ' m saying that . professor c: no , i in your case , you were joking about it , but , your case the fact that your talking about similar things at a couple of conferences , it 's not these are conferences that have d really different emphases . whereas hlt and eurospeech , pretty similar , so i ca n't see really just putting in the same thing , phd d: the hlt paper is really more of a introduction - to - the - project paper , and , professor c: or some or some , i would see eurospeech if we have some eurospeech papers , these will be paper p , submissions . these will be things that are particular things , aspects of it that we 're looking at , rather than , attempt at a global paper about it . postdoc a: i did go through one of these meetings . i had , one of the transcribers go through and tighten up the bins on one of the , nsa meetings , and then i went through afterwards and double - checked it so that one is really very accurate . i men i mentioned the link . i sent that one ? postdoc a: , i ' m trying to remember i do n't remember the number off hand . postdoc a: it 's one of the nsa 's . i sent email before the conference , before last week . bef - what is wednesday , thursday . postdoc a: i ' m that one 's accurate , i ' ve been through it myself . grad e: , that 's what i was gon na say . the problem with those , they 're all german . phd d: and e and extremely hard to follow , like word - wise , i bet the transcri , i have no idea what they 're talking about , postdoc a: i corrected it for a number of the words . i ' m that , they 're accurate now . phd d: , this is tough for a language model probably but that might be useful just for speech . 
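Selecting cross-talk-free adaptation data, as proposed above, amounts to keeping only the target channel's segments that overlap nothing on any other channel. A minimal sketch under that assumption, with per-channel (start, end) lists; this is an illustration, not the recognizer's actual adaptation interface.

```python
def overlaps(a, b):
    """True if intervals a=(start, end) and b overlap in time."""
    return a[0] < b[1] and b[0] < a[1]

def clean_adaptation_segments(channels, target):
    """channels: {name: [(start, end)]} per-channel speech segments.
    Keeps only the target channel's segments that overlap nothing on
    any other channel, i.e. the cross-talk-free adaptation data."""
    others = [seg for name, segs in channels.items()
              if name != target for seg in segs]
    return [seg for seg in channels[target]
            if not any(overlaps(seg, o) for o in others)]

chans = {"chuck": [(0.0, 2.0), (5.0, 6.0)], "morgan": [(1.5, 3.0)]}
print(clean_adaptation_segments(chans, "chuck"))   # [(5.0, 6.0)]
```

On the figure quoted above, roughly two-thirds of a channel would survive this filter, which is consistent with the claim that the remainder is too blurred by cross-talk to help adaptation.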
grad e: , before you l go i it 's alright for you to talk a little without the mike i noticed you adjusting the mike a lot , did it not fit you ? phd b: actually if you have a larger head , that mike 's got ta go farther away which means the balance is gon na make it wanna tip down . grad e: cuz , i ' m just thinking , we were we 're we ' ve been talking about changing the mikes , for a while , and if these are n't acoustically they seem really good , but if they 're not comfortable , we have the same problems we have with these stupid things . postdoc a: it 's com this is the first time i ' ve worn this , i find it very comfortable . grad e: i find it very comfortable too , but , it looked like andreas was having problems , and morgan was saying it grad e: you did wear it this morning ? ok , it 's off , so you can put it on . phd b: i , i do n't want it on , i just want to , say what is a problem with this . if you are wearing this over your ears and you ' ve got it all the way out here , then the balance is gon na want to pull it this way . where as if somebody with a smaller head has it back here , grad e: wh what it 's supposed to do is the backstrap is supposed to be under your crown , and so that should be if it 's right against your head there , which is what it 's supposed to be , that balances it so it does n't slide up . grad e: , right below if you feel the back of your head , you feel a little lump , and so it 's supposed to be right under that . postdoc a: , wonder if it 's if he was wearing it over his hair instead of under his hair . grad e: probably it was it probably just was n't tight enough to the back of his head . , so the directions do talk about bending it to your size , which is not really what we want . phd b: the other thing that would do it would be to hang a five pound weight off the back . grad e: we at boeing i used i was doing augmented reality so they had head - mounts on , and we had a little jury - rigged one with a welder 's helmet , grad e: and we had just a bag with a bunch of marbles in it as a counter - balance . professor c: or maybe this could be helpful just for evening the conversation between people . if people those who talk a lot have to wear heavier weights , and , so , what was i gon na say ? , i was gon na say , i had these , conversations with nist folks also while i was there and , , so they have their plan for a room , with , mikes in the middle of the table , and , close - mounted mikes , and they 're talking about close - mounted and lapels , just cuz professor c: and , like multiple video cameras coverin covering every everybody every place in the room , professor c: , the mikes in the middle , the head - mounted mikes , the lapel mikes , the array , with , there 's some discussion of fifty - nine , professor c: they might go down to fifty - seven because , there is , some pressure from a couple people at the meeting for them to use a kemar head . i forget what kemar , stands for , but what it is it 's dummy head that is very specially designed , and , so what they 're actually doing is they 're really there 's really two recording systems . professor c: so they may not be precisely synchronous , but the but there 's two recording systems , one with , twenty - four channels , and one with sixty - four channels . and the sixty - four channel one is for the array , but they ' ve got some empty channels there , and anyway they like they 're saying they may give up a couple if for the kemar head if they go with that . 
grad e: , h , j jonathan fiscus did say that , they have lots of software for doing calibration for skew and offset between channels grad e: their their legal issues wo n't allow them to do otherwise . but it sounded like they were pretty thought out grad e: and they 're gon na be real meetings , it 's just that they 're with str with people who would not be meeting otherwise . grad e: it 's just informal . , i also sat and chatted with several of the nist folks . they seemed like a good group . professor c: , we sh we should just have you read it , but , i mea ba i , we ' ve all got these little proceedings , professor c: but , , it was about , , going to a new task where you have insufficient data and using data from something else , and adapting , and how that works . , so it was pretty related to what liz and andreas did , except that this was not with meeting , it was with , like they s did n't they start off with broadcast news system ? and then they went to grad e: the - their broadcast news was their acoustic models and then all the other tasks were much simpler . so they were command and control and that thing . grad e: that it not only works , in some cases it was better , which was pretty interesting , but that 's cuz they did n't control for parameters . phd b: did they ever try going the other direction from simpler task to more complicated tasks , grad e: , one of the big problems with that is often the simpler task is n't fully does n't have all the phones in it , and that makes it very hard . but i ' ve done the same thing . i ' ve been using broadcast news nets for digits , like for the spr speech proxy thing that i did ? that 's what i did . so . it works . professor c: , we should probably what would actually what we should do , i have n't said anything about this , but probably the five of us should pick out a paper or two that , , got our interest , and we should go around the room at one of the tuesday lunch meetings and say , what was good about the conference , phd d: , the summarization was interesting , i anything about that field , but for this proposal on meeting summarization , , it 's a far cry because they were n't working with meeting type data , but he got an overview on some of the different approaches , phd d: but , there 's that 's a huge field and probably the groups there may not be representative of the field , i exactly that everyone submits to this particular conference , phd d: yet there was , let 's see , this was on the last day , mitre , bbn , and , prager phd d: this was wednesday morning . the sentence ordering one , was that barselou , and these guys ? phd d: anyway , i it 's in the program , i should have read it to remind myself , but that 's useful and like when mari and katrin and jeff are here it 'd be good to figure out some kinds of things that we can start doing maybe just on the transcripts cuz we already have postdoc a: , i like the idea that adam had of , z maybe generating minutes based on some of these things that we have because it would be easy to do that just , it has to be , though , someone from this group because of the technical nature of the thing . grad e: someone who actually does take notes , i ' m very bad at note - taking . 
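Skew and offset calibration between two recording systems, of the sort mentioned at the top of this stretch, is classically done by finding the lag that maximizes the cross-correlation of a shared signal. This is a textbook sketch of that idea, not NIST's actual calibration software; it handles a fixed offset only, not clock drift.

```python
import numpy as np

def estimate_offset(x, y, rate):
    """Seconds to add to y's timestamps to map them onto x's clock.
    Fixed offset only; O(n^2), so use FFT-based correlation for
    long recordings."""
    xcorr = np.correlate(x, y, mode="full")
    lag = int(np.argmax(xcorr)) - (len(y) - 1)
    return lag / rate

rate = 1000
rng = np.random.default_rng(0)
x = rng.standard_normal(rate)                  # one second of noise
y = np.concatenate([np.zeros(30), x])[:rate]   # y lags x by 30 samples
print(estimate_offset(x, y, rate))             # -0.03
```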
phd d: but what 's interesting is there 's all these different evaluations , like just , how do you evaluate whether the summary is good or not , phd d: and that 's what 's was interesting to me is that there 's different ways to do it , grad e: and as i said , i like the microsoft talk on scaling issues in , word sense disambiguation , that was interesting . grad e: it it was the only one it was the only one that had any real disagreement about . professor c: , i did n't have as much disagreement as i would have liked , but i did n't wanna i wouldn i did n't wanna get into it because , , it was the application was one i did n't know anything about , it just would have been , me getting up to be argumentative , but , , the missing thi so what they were saying it 's one of these things is , all you need is more data , but i mea i wh it @ that 's dissing it , improperly , it was a study . , they were doing this it was n't word - sense disambiguation , it was grad e: but it was a very simple case of " to " versus " too " versus " two " and " there " , " their " , " they 're " phd d: and there and their and that you could do better with more data , that 's clearly statistically professor c: and so , what they did was they had these different kinds of learning machines , and they had different amounts of data , and so they did like , eight different methods that everybody , , argues about , " my learning machine is better than your learning machine . " and , they were started off with a million words that they used , which was evidently a number that a lot of people doing that particular task had been using . so they went up , being microsoft , they went up to a billion . and then they had this log scale showing a , and naturally everything gets professor c: they , it 's a big company , i did n't mean it as a ne anything negative , but i grad e: , the reason they can do that , is that they assumed that text that they get off the web , like from wall street journal , is correct , and edit it . so that 's what they used as training data . it 's just saying if it 's in this corpus it 's correct . professor c: but , yes . there was the effect that , one would expect that you got better and better performance with more and more data . , but the real point was that the different learning machines are all over the place , and by going up significantly in data you can have much bigger effect then by switching learning machines and furthermore which learning machine was on top depended on where you were in this picture , so , professor c: could be . so so , that was , it 's a good point , but the problem i had with it was that the implications out of this was that , the choices you make about learning machines were therefore irrelevant which is not at n t as for as i know in tasks i ' m more familiar with @ is not true . what i what is true is that different learning machines have different properties , and you wanna those properties are . and someone else implied that we s , a all the study of learning machine we still what those properties are . we them perfectly , but we know that some kinds use more memory and some other kinds use more computation and some are hav have limited discrimination , but are just easy to use , and others are phd b: but does n't their conclusion just you could have guessed that before they even started ? 
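The experimental design described above is easy to reproduce in miniature: several off-the-shelf learners on a confusion-set task at log-scaled training sizes. The sketch below uses a toy template generator in place of the web-scale text the study drew on, so the absolute accuracies are meaningless; only the learning-curve loop is the point, and the feature and classifier choices are illustrative assumptions.

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

TEMPLATES = {  # contexts with the target word blanked out
    "two": ["i have _ dogs", "chapter _ was short", "_ plus one is three"],
    "too": ["that is _ much", "i was _ tired to go", "me _"],
    "to":  ["i want _ go home", "give it _ me", "we drove _ the store"],
}

def sample(n, rng):
    """Draw n (context, target-word) pairs from the toy templates."""
    texts, labels = [], []
    for _ in range(n):
        word = rng.choice(list(TEMPLATES))
        texts.append(rng.choice(TEMPLATES[word]))
        labels.append(word)
    return texts, labels

rng = random.Random(0)
test_x, test_y = sample(2000, rng)
for size in (100, 1000, 10000):                  # log-scaled training sizes
    train_x, train_y = sample(size, rng)
    for clf in (MultinomialNB(), Perceptron()):
        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
        model.fit(train_x, train_y)
        acc = model.score(test_x, test_y)
        print(f"{size:6d} examples  {type(clf).__name__:13}  acc={acc:.3f}")
```

With real data the interesting output is the crossing of the per-learner curves as size grows, which is exactly the effect debated in this discussion.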
because if you assume that these learning things get better and better , phd b: then as you approach there 's a point where you ca n't get any better , you get everything right . grad e: no , but there was still a spread . they were n't all up they were n't converging . phd b: but what i ' m saying is that th they have to , as they all get better , they have to get closer together . grad e: they were all still spread . but they right , right . but they had n't even come close to that point . all the tasks were still improving when they hit a billion . phd b: but they 're all going the same way , so you have to get closer . professor c: that 's getting cl , the spread was still pretty wide that 's th that 's true , but , it would be irntu intu intuition that this would be the case , but , to really see it and to have the intuition is quite different , somebody w let 's see who was talking about earlier that the effect of having a lot more data is quite different in switchboard than it is in broadcast news , phd d: so it depends a lot on whether , it disambiguation is exactly the case where more data is better , you 're you can assume similar distributions , but if you wanted to do disambiguation on a different type of , test data then your training data , then that extra data would n't generalize , grad e: but , one of their p they they had a couple points . w , one of them was that " , maybe simpler algorithms and more data are is better " . less memory , faster operation , simpler . because their simplest , most brain - dead algorithm did pretty darn when you got gave it a lot more data . and then also they were saying , " , m you have access to a lot more data . why are you sticking with a million words ? " , their point was that this million - word corpus that everyone uses is ten or fifteen years old . and everyone is still using it , professor c: but anyway , i it 's just the i it 's not really the conclusion they came to so much , as the conclusion that some of the , commenters in the crowd came up with professor c: that , , this therefore is further evidence that , more data is really all you should care about , and that was just going too far the other way , professor c: and the , one person ga g got up and made a brief defense , but it was a different grounds , it was that , i w the reason people were not using so much data before was not because they were stupid or did n't realize data was important , but th they did n't have it available . , but the other point to make a again is that , machine learning still does matter , but it matters more in some situations than in others , and it and also there 's not just mattering or not mattering , but there 's mattering in different ways . , you might be in some situation where you care how much memory you 're using , or you care , what recall time is , or you care , and phd d: or done another language , or , you so there 's papers on portability and rapid prototyping and blah - blah , and then there 's people saying , " , just add more data . " professor c: so these , th the in the speech side , the thing that @ always occurs to me is that if you one person has a system that requires ten thousand hours to train on , and the other only requires a hundred , and they both do about the same because the hundred hour one was smarter , that 's gon na be better . because people , there is n't gon na be just one system that people train on and then that 's it for the r for all of time . 
, people are gon na be doing other different things , and so it these things matters matter . postdoc a: so , this was a very provocative slide . she put this up , and it was like this is this p people kept saying , " can i see that slide again ? " and then they 'd make a comment , and one person said , - known person said , , " before you dismiss forty - five years including my work " phd d: but th , the same thing has happened in computational linguistics , you look at the acl papers coming out , and now there 's a turn back towards , ok we ' ve learned statistic , we 're getting what we expect out of some statistical methods , the there 's arguments on both sides , grad e: is that all of them are based on all the others , just , you ca n't say grad e: , so . and i ' m saying the same thing happened with speech recognition , for a long time people were hand - c coding linguistic rules and then they discovered machine - learning worked better . and now they 're throwing more and more data and worrying perhaps worrying less and less about , the exact details of the algorithms . grad e: shall we read some digits ? are we gon na do one at a time ? or should we read them all agai at once again . professor c: let 's do it all at once . we @ let 's try that again . grad e: so remember to read the transcript number so that , everyone knows that what it is . and ready ? three , two , one .
the berkeley meeting recorder group discussed the preparation of a data sample for ibm , the manual adjustment of time bins by transcribers , recognition results for a test set of digits data , and forced alignments. participants also talked about eurospeech 2001 submissions , and exchanged comments on the proceedings of the recently attended human language technologies conference ( hlt'01 ). preliminary recognition results were presented for a subset of digits data. efforts to deal with cross-talk and improve forced alignments for non-digits data were also discussed.

subsequent manual adjustment of speech and non-speech boundaries will be delegated to the transcriber pool. a subset of meeting recorder data will be prepared ( i.e . pre-segmented and manually adjusted ) for delivery to ibm. the transcriber interface may require modifications if it becomes necessary for transcribers to quickly switch among waveform displays.

transcribers risk overlooking speech that is deeply embedded in the mixed signal. should transcriptions be derived from each of the close-talking channels or from the mixed signal alone? the pre-segmentation tool does not perform well on short utterances , e.g . backchannels. the transcriber interface does not allow the user to quickly switch among visual displays , i.e . multi-channel waveforms. forced alignments were problematic for non-digits data due to cross-talk. this problem was reported to be particularly bad for cross-talk featuring more than one word. echo cancellation was considered as a means of improving forced alignments , but was ultimately deemed to be too time-consuming given the dynamic aspect of adapting distances between speakers. comparing error rates in terms of the recording device used , i.e . lapel versus wireless microphones , is tedious. deleting segments of the recordings is expected to be very time-consuming for transcribers. more results are needed for generating adequate submissions for eurospeech'01. participants have complained that the head-mounted microphone is uncomfortable.

one meeting recording has been channelized and pre-segmented for delivery to ibm. a sample of digits data is being prepared for ibm. preliminary recognition results were obtained for a subset of digits data. the error rate distribution was multimodal , reflecting differences in performance for native versus non-native speakers , and also possible pre-processing errors. future efforts will involve an attempt to get good forced alignments on digits data and generate a report for eurospeech'01. a program has been developed for replacing sections of recorded speech with editing bleeps. the tightening of time bins for one nsa meeting was checked and judged to be highly accurate. efforts are ongoing to improve forced alignments for a subset of non-digits data , including acoustic adaptation manipulations.
###dialogue: grad e: it depends on if the temp files are there or not , that at least that 's my current working hypothesis , that what happens is it tries to clear the temp files and if they 're too big , it crashes . phd b: when the power went out the other day and i restarted it , it crashed the first time . grad e: it 's i they 're called temp files , but they 're not actually in the temp directory they 're in the scratch , they 're not backed up , but they 're not erased either on power failure . phd d: but that 's usually the meeting that i recorded , and it neve it does n't crash on me . phd b: this was n't actually , this was n't a before your meeting , this was , tuesday afternoon when , , robert just wanted to do a little recording , professor c: i when would be a good excuse for it , but ca n't to be giving a talk t and use the example from last week with everybody t doing the digits at once . phd b: talk about a good noise shield . ? you wanted to pe keep people from listening in , you could like have that playing outside the room . nobody could listen in . grad e: i since i ' ve been gone all week , i did n't send out a reminder for an agenda , grad e: do we have anything to talk about or should we just read digits and go ? phd d: so , are we do we have like an agenda or anything that we should be phd f: so so the deal is that , , be available after , like ten thirty . i how s how early you wanted to grad e: eurospeech is due on friday and then i ' m going down to san , san jose friday night , so , if we start and late saturday that 's a good thing . professor c: , and they 'll end up here . so b and also brian kingsbury is actually flying from , the east coast on that morning . professor c: so , i i will be , he 's taking a very early flight and we do have the time work difference running the right way , but i still think that there 's no way we could start before eleven . it might end up really being twelve . so when we get closer we 'll find people 's plane schedules , and let everybody know . , so . that 's good . grad e: but , maybe an agenda , or at least some things to talk about would be a good idea . professor c: we can start gathering those ideas , but then we should firm it up by next thursday 's meeting . postdoc a: will we have time to , to prepare something that we in the format we were planning for the ibm transcribers by then , or ? phd b: yes , he 's i ' m , i should have forwarded that along . , i mentioned at the last meeting , he said that , he talked to them and it was fine with the beeps they would be that 's easy for them to do . grad e: great . ok . so , , though thi - thilo is n't here , but , i have the program to insert the beeps . what i do n't have is something to parse the output of the channelized transcripts to find out where to put the beeps , but that should be really easy to do . so do we have a meeting that 's been done with , grad e: that we ' ve tightened it up to the point where we can actually give it to ibm and have them try it out ? postdoc a: he generated , a channel - wise presegmented version of a meeting , but it was robustness rather than edu so i depends on whether we 're willing to use robustness ? grad e: we had talked about doing maybe edu as a good choice , though . , whatever we have . phd b: we ' ve talked about that as being the next ones we wanted to transcribe . grad e: hand - checked ? cuz that was one of the processes we were talking about as . postdoc a: that 's right . , we have n't done that . 
i could set someone on that tomorrow . phd b: and we probably do n't have to do necessarily a whole meeting for that if we just wanna send them a sample to try . phd b: i , maybe we can figure out how long it 'll take @ to do . grad e: i , it seems to me w we probably should go ahead and do a whole meeting because we 'll have to transcribe the whole meeting anyway sometime . professor c: yes except that if they had if there was a choice between having fifteen minutes that was fully the way you wanted it , and having a whole meeting that did n't get at what you wanted for them it 's just dependent of how much phd b: i , the only thing i ' m not about is , how quickly can the transcribers scan over and fix the boundaries , grad e: it 's gon na be one or two times real time at , excuse me , two or more times real time , right ? cuz they have to at least listen to it . professor c: can we pipeline it so that say there 's , the transcriber gets done with a quarter of the meeting and then we you run it through this other ? grad e: the other is i b i ' m just thinking that from a data keeping - track - of - the - data point of view , it may be best to send them whole meetings at a time and not try to send them bits and pieces . professor c: ok , so . , that 's right . so the first thing is the automatic thing , and then it 's the transcribers tightening up , and then it 's ibm . professor c: ok , so you might as ha run the automatic thing over the entire meeting , and then , you would give ibm whatever was fixed . professor c: , but start from the beginning and go to the end , right ? so if they were only half way through then that 's what you 'd give ibm . right ? phd b: as of what point ? the i the question on my mind is do we for the transcribers to adjust the marks for the whole meeting before we give anything to ibm , or do we go ahead and send them a sample ? let their professor c: why would n't we s @ w i if they were going sequentially through it , why would n't we give them i are we trying to get something done by the time brian comes ? professor c: so if we were , then it seems like giving them something , whatever they had gotten up to , would be better than nothing . grad e: , i do n't think , h they typically work for what , four hours , something like that ? grad e: the they should be able to get through a whole meeting in one sitting . i would think , unless it 's a lot harder than we think it is , which it could be , certainly . phd b: so it 's gon na be , depending on the number of people in the meeting , postdoc a: i there is this issue of , if the segmenter thought there was no speech on a particular stretch , on a particular channel , and there really was , then , if it did n't show up in a mixed signal to verify , then it might be overlooked , so , the question is " should a transcriber listen to the entire thing or can it g can it be based on the mixed signal ? " and i th so far as i ' m concerned it 's fine to base it on the mixed signal at this point , and grad e: that 's what it seems to me too , in that if they need to , just like in the other cases , they can listen to the individual , if they need to . grad e: so , they have the normal channeltrans interface where they have each individual speaker has their own line , but you 're listening to the mixed signal and you 're tightening the boundaries , correcting the boundaries . you should n't have to tighten them too much because thilo 's program does that . phd d: it will miss them . 
it will miss most of the really short things . like that . phd d: , you have to say " - " more slowly to get c no , i ' m s i ' m actually serious . postdoc a: , but presumably , most of those they should be able to hear from the mixed signal unless they 're embedded in the heavil heavy overlap section when in which case they 'd be listening to the channels anyway . phd b: ca n't we could n't we just have , i , maybe this just does n't fit with the software , but i if i did n't know anything about transcriber and i was gon na make something to let them adjust boundaries , i would just show them one channel at a time , with the marks , and let them adju grad e: , but then they have to do but then they for this meeting they would have to do seven times real time , and it would probably be more than that . grad e: right ? because they 'd have to at least listen to each channel all the way through . phd b: but i but it 's very quick , you scan , if you have a display of the waveform . postdoc a: cuz you also see the breaths on the waveform . i ' ve looked at the int , s i ' ve tried to do that with a single channel , and you do see all sorts of other besides just the voice . grad e: , and that they 're going much more on acoustics than they are on visuals . postdoc a: what you the digital what the digital task that you had your interface ? , i know for a fact that one of those sh she could really she could judge what th what the number was based on the waveform . grad e: , that 's actually true . , you 're right . you 're right . , i found the same thing that when i was scanning through the wave form i could see when someone started to read digits just by the shapes . professor c: so , they 're looking at a mixed signal , or they 're looking what are they looking at visually ? postdoc a: , they have a choice . they could choose any signal to look at . i ' ve tried lookin but usually they look at the mixed . but i ' ve tried looking at the single signal and in order to judge when it was speech and when it was n't , but the problem is then you have breaths which show up on the signal . professor c: but the procedure that you 're imagining , people vary from this , is that they have the mixed signal wave form in front of them , professor c: and they have multiple , , let 's see , there is n't we do n't have transcription yet . so but there 's markers of some sort that have been happening automatically , and those show up on the mixed signal ? there 's a @ clicks ? postdoc a: they show up on the separate ribbons . so you have a separate ribbon for each channel , postdoc a: and i it 'll be because it 's being segmented as channel at a time with his with thilo 's new procedure , then you do n't have the correspondence of the times across the bins across the ribbons you could have professor c: ok , so the way you 're imaging is they play it , and they see this happened , then and if it 's about right , they just let it slide , and if it there 's a question on something , they stop and maybe look at the individual wave form . grad e: , they would n't look at it at this point . they would just listen . grad e: , the problem is that the interface does n't really allow you to switch visuals . grad e: the problem is that the tcl - tk interface with the visuals , it 's very slow to load waveforms . grad e: and so when i tried that was the first thing i tried when i first started it , postdoc a: , . visually . 
you can you can switch quickly between the audio , but you just ca n't get the visual display to show quickly . so you have to it takes , i , three , four minutes to , it takes long enough postdoc a: it takes long enough cuz it has to reload the i exactly what it 's doing frankly cuz but it t it takes long enough that it 's just not a practical alternative . grad e: it does some shape pre - computation so that it can then scroll it quickly , postdoc a: now you could set up multiple windows , each one with a different signal showing , and then look between the windows . grad e: , so we could use like x waves instead of transcriber , and it loads faster , certainly . grad e: so i actually before , dave gelbart did this , i did an interface which showed each waveform and ea a ribbon for each waveform , grad e: but the problem with it is even with just three waveforms it was just painfully slow to scroll . so you just scroll a screen and it would , go " kur - chunk ! " postdoc a: , i am thinking if we have a meeting with only four speakers and , you could fire up a transcriber interface for , y , in different windows , multiple ones , one for each channel . and it 's a hack but it would be one way of seeing the visual form . grad e: that if we decide that we need that they need to see the visuals , we need to change the interface so that they can do that . phd d: that 's actually what of , loading the chopped up waveforms , , that would make it faster phd b: the problem is if anything 's cut off , you ca n't expand it from the chopped up phd d: no , the individual channels that were chopped up that it 'd be to be able to go back and forth between those short segments . cuz you do n't really nee like nine tenths of the time you 're throwing most of them out , but what you need are tho that particular channel , or that particular location , and , might be , cuz we save those out already , to be able to do that . but it wo n't work for ibm , it only works here cuz they 're not saving out the individual channels . postdoc a: , i do think that this will be a doable procedure , and have them starting with mixed postdoc a: and , then when they get into overlaps , just have them systematically check all the channels to be that there is n't something hidden from audio view . grad e: , hopefully , the mixed signal , the overlaps are pretty audible because it is volume equalized . so they should be able to hear . the only problem is , counting how many and if they 're really correct or not . so , i . grad e: right but once that they happen , you can at least listen to the close talking , phd d: but you would know that they were there , and then you would switch . right . and then you would switch into the other professor c: but right now , to do this limitation , the switching is going to be switching of the audio ? is what she 's saying . grad e: did dave did dave do that change where you can actually just click rather than having to go up to the menu to listen to the individual channels ? postdoc a: i ' m not what click on the ribbon ? , you can get that , get you can get the , you can get it to switch audio ? , not last i tried , but , maybe he 's changed it again . grad e: we should get him to do that because , that would be much , much faster than going to the menu . postdoc a: i disagree . there 's a reason i disagree , and that is that , you it 's very good to have a dissociation between the visual and the audio . 
there 're times when i wanna hear the mixed signal , bu but i want to transcribe on the single channel . so right now grad e: just something so that it 's not in the menu option so that you can do it much faster . postdoc a: , that 's the i that might be a personal style thing . i find it really convenient the way it 's set up right now . grad e: it just seems to me that if you wanna quickly " was that jane , no , was that chuck , no , was that morgan " , right now , you have to go up to the menu , and each time , go up to the menu , select it , listen to that channel then click below , and then go back to the menu , select the next one , and then click below . professor c: so , done with that ? does any i forget , does anybody , working on any eurospeech submission related to this ? grad e: i would like to try to do something on digits but if we have time . , it 's due next friday so we have to do the experiments and write the paper . so , i ' m gon na try , but , we 'll just have to see . so actually i wanna get together with both andreas and , stephane with their respective systems . professor c: there was that we that 's right , we had that one conversation about , what did it mean for , one of those speakers to be pathological , was it a grad e: whereas it 's probably something pathologic and actually stephane 's results , confirm that . he s he did the aurora system also got very lousy average error , like fifteen or , fifteen to twenty percent average ? but then he ran it just on the lapel , and got about five or six percent word error ? so that means to me that somewhere in the other recordings there are some pathological cases . but , we th that may not be true . it may be just some of the segments they 're just doing a lousy job on . so i 'll listen to it and find out since you 'd actually split it up by segment . grad e: , he had sent that around to everyone , did you just sent that to me ? phd f: no , i d i did n't . since i considered those preliminary , i did n't . phd f: there were t there was one h one bump at ze around zero , which were the native speakers , phd f: whe those were the non - natives . and then there was another distinct bump at , like , a hundred , which must have been some problem . phd f: and there was this one meeting , i forget which one it was , where like , six out of the eight channels were all , like had a hundred percent error . grad e: which probably means like there was a th the recording interface crashed , or there was a short , someone was jiggling with a cord grad e: or , i extracted it incorrectly , it was labeled it was transcribed incorrectly , something really bad happened , and have n't listened to it yet to find out what it was . phd f: so , if i excluded the pathological ones , by definition , those that had like over ninety - five percent error rate , and the non - natives , then the average error rate was like one point four , phd f: which seemed reasonable given that , the models were n't tuned for it . and the grammar was n't tuned either . phd d: but there 's no overlap during the digit readings , so it should n't really matter . professor c: and so , cuz because what he was what i was saying when i looked at those things is it i was almost gon na call it quadrimodal because there was a whole lot of cases where it was zero percent . they just plain got it all right . and then there and then there was another bunch that were couple percent . 
phd f: but if you p if you actually histogrammed it , and it was a , it was zero was the most of them , phd f: but then there were the others were decaying from there . and then there was the bump for the non - natives and then the pathological ones , postdoc a: you did you have , something in the report about , for f , forced alignment ? have you have you started on that ? phd f: , , so i ' ve been struggling with the forced alignments . so the scheme that i drew on the board last time where we tried to , allow reject models for the s speech from other speakers , most of the time it does n't work very . so , and the i have n't done , the only way to check this right now was for me to actually load these into x waves and , plus the alignments , and s play them and see where the and it looks and so i looked of the utterances from you , chuck , in that one conversation , i which you probably know which one , it 's where you were on the lapel and morgan was sitting next to you and we can hear everything morgan says . but and some of what you , you also appear quite a bit in that cross - talk . so , i actually went through all of those , there were fifty - five segments , in x waves , and did a crude check , and more often than not , it gets it wrong . so there 's either the beginning , mostly the beginning word , where th you , , chuck talks somewhere into the segment , but the first , word of what he says , often " i " but it 's very reduced " i , " that 's just aligned to the beginning of someone else 's speech , in that segment , which is cross - talk . so , i ' m still tinkering with it , but it might be that we ca n't get clean alignments out of this out of those , channels , phd d: but it 's clear from dan that this is not something you can do in a short amount of time . phd d: so so we , we had spent a lot of time , writing up the hlt paper and we wanted to use that , analysis , but the hlt paper has , it 's a very crude measure of overlap . it 's not really something you could scientifically say is overlap , it 's just whether or not the , the segments that were all synchronized , whether there was some overlap somewhere . phd d: and , that pointed out some differences , so he thought if we can do something quick and dirty because dan said the cross - cancellation , it 's not straight - forward . if it were straight - forward then we would try it , but so , it 's good to hear that it was not straight - forward , thinking if we can get decent forced alignments , then at least we can do a overall report of what happens with actual overlap in time , but , phd b: he 's just saying you have to look over a longer time window when you do it . phd b: so you just have to look over longer time when you 're trying to align the things , you ca n't just look grad e: . are you talking about the fact that the recording software does n't do time - synchronous ? is that what you 're referring to ? that seems to me you can do that over the entire file and get a very accurate phd f: the issue was that you have to you have you first have to have a pretty good speech detection on the individual channels . phd d: and it 's dynamic , so i it was more dynamic than some simple models would be able t to so there are some things available , and i too much about this area where if people are n't moving around much than you could apply them , and it should work pretty if you took care of this recording time difference . 
phd d: which a at least is defined , but then if you add the dynamic aspect of adapting distances , then it was n't i it just was n't something that he could do quickly and not in time for us to be able to do something by two weeks from now , so . less than a week . so , so i what we can do if anything , that 's worth , a eurospeech paper at this point . phd f: , we would really need , ideally , a transcriber to time mark the , the be at least the beginning and s ends of contiguous speech . , and , then with the time marks , you can do an automatic comparison of your forced alignments . phd b: because really the at least in terms of how we were gon na use this in our system was to get an ideal an idea , for each channel about the start and end boundaries . phd f: no , that 's how i ' ve been looking at it . , i do n't care that the individual words are aligned correctly , but you do n't wanna , infer from the alignment that someone spoke who did n't . phd b: , maybe if it does n't work for lapel , we can just not use that phd f: i have n't i ha just have n't had the time to , do the same procedure on one of the so i would need a k i would need a channel that has a speaker whose who has a lot of overlap but s , is a non - lapel mike . and , where preferably , also there 's someone sitting next to them who talks a lot . phd f: maybe someone can help me find a good candidate and then i would be willing to phd b: we c what ? maybe the best way to find that would be to look through these . phd b: cuz you can see the seat numbers , and then you can see what type of mike they were using . and so we just look for , somebody sitting next to adam at one of the meetings phd d: it might not be a single person who 's always overlapping that person but any number of people , and , if you align the two hypothesis files across the channels , just word alignment , you 'd be able to find that . so so i that 's a last ther there 're a few things we could do . one is just do like non - lapels if we can get good enough alignments . another one was to try to get somehow align thilo 's energy segmentations with what we have . but then you have the problem of not knowing where the words are because these meetings were done before that segmentation . but maybe there 's something that could be done . phd b: what what is why do you need the , the forced alignment for the hlt for the eurospeech paper ? phd d: , i wanted to just do something not on recognition experiments because that 's ju way too early , but to be able to report , actual numbers . like if we had hand - transcribed pe good alignments or hand - checked alignments , then we could do this paper . it 's not that we need it to be automatic . but without knowing where the real words are , in time phd d: to to an overlap really if it 's really an overlap , or if it 's just a segment correlated with an overlap , phd d: and i that 's the difference to me between like a real paper and a , promissory paper . so , if we d it might be possible to take thilo 's output and like if you have , like right now these meetings are all , phd d: , they 're time - aligned , so if these are two different channels and somebody 's talking here and somebody else is talking here , just that word , if thilo can tell us that there 're boundaries here , we should be able to figure that out because the only thing transcribed in this channel is this word . 
but , , if there are things phd d: , if you have two and they 're at the edges , it 's like here and here , and there 's speech here , then it does n't really help you , so , phd d: it w it would , but , we exactly where the words are because the transcriber gave us two words in this time bin postdoc a: it 's a merging problem . if you had a if you had a s if you had a script which would postdoc a: i ' ve thought about this , and i ' ve discussed it with thilo , postdoc a: , the , i in principle i could imagine writing a script which would approximate it to some degree , but there is this problem of slippage , grad e: s cuz it seemed like most of the cases are the single word sorts , or at least a single phrase postdoc a: i would n't make that generalization cuz sometimes people will say , " and then i " and there 's a long pause and finish the sentence and sometimes it looks coherent and the it 's not a simple problem . but it 's really and then it 's coupled with the problem that sometimes , with a fricative you might get the beginning of the word cut off and so it 's coupled with the problem that thilo 's is n't perfect either . , we ' ve i th it 's like you have a merging problem plus so merging plus this problem of , not y i if the speech - nonspeech were perfect to begin with , the detector , that would already be an improvement , but that 's impossible , i that 's too much to ask . and so i and may , it 's that there always th there would have to be some hand - tweaking , but it 's possible that a script could be written to merge those two types of things . i ' ve discussed it with thilo and in terms of not him doing it , but we discussed some of the parameters of that and how hard it would be to in principle to write something that would do that . phd d: , i in the future it wo n't be as much as an issue if transcribers are using the tightened boundaries to start with , then we have a good idea of where the forced alignment is constrained to . postdoc a: , it 's just , a matter of we had the revolution of improved , interface , one month too late , postdoc a: so it 's just a matter of , from now on we 'll be able to have things channelized to begin with . grad e: and we 'll just have to see how hard that is . so so whether the corrections take too much time . i was just thinking about the fact that if thilo 's missed these short segments , that might be quite time - consuming for them to insert them . phd d: but he also can adjust this minimum time duration constraint and then what you get is noises mostly , grad e: it might be easier to delete something that 's wrong than to insert something that 's missing . professor c: if you can feel confident that what the , that there 's actually something that you 're not gon na miss something , grad e: cuz then you just delete it , and you do n't have to pick a time . postdoc a: the problem is i it 's a really good question , and i really find it a pain in the neck to delete things because you have to get the mouse up there on the t on the text line and i and otherwise you just use an arrow to get down , i it depends on how lar th there 's so many extra things that would make it one of them harder than the other , or vice versa . it 's not a simple question . but , , in principle , like , if one of them is easier then to bias it towards whichever one 's easier . grad e: , i the semantics are n't clear when you delete a segment , because you would say you would have to determine what the surroundings were . 
phd d: you could just say it 's a noise , though , and write , a post - processor will just all you have to do is just phd d: or just say it 's just put " x , " , like " not speech " , grad e: but it 's the semantics that are questionable to me , that you delete something so let 's say someone is talking to here , and then you have a little segment here . , is that part of the speech ? is it part of the nonspeech ? , w what do you embed it in ? phd d: there 's something , though , about keeping , and this is probably another discussion , keeping the that thilo 's detector detected as possible speech and just marking it as not speech than deleting it . because then when you align it , then the alignment can you can put a reject model or whatever , grad e: , i see . so then they could just like put that 's what you meant by just put an " x " there . grad e: so so all they so that all they would have to do is put like an " x " there . grad e: so blank for silence , " s for speech , " x for something else . phd d: whatever , that 's actually a better way to do it cuz the a the forced alignment will probably be more consistent than postdoc a: , like , there 's a complication which is that you can have speech and noise in s postdoc a: , on the same channel , the same speaker , so now sometimes you get a ni microphone pop and , , there 're these fuzzy hybrid cases , and then the problem with the boundaries that have to be shifted around . it 's not a simple problem . phd d: anyway , quick question , though , at a high level do people think , let 's just say that we 're moving to this new era of like using the , pre - segmented t , non - synchronous conversations , does it make sense to try to take what we have now , which are the ones that , we have recognition on which are synchronous and not time - tightened , and try to get something out of those for purposes of illustrating the structure and the nature of the meetings , or is it better to just , forget that and tr , it 's grad e: , we 'll have to , eventually . and my hope was that we would be able to use the forced alignment to get it . phd d: is it worth if we ca n't then we can fake it even if we 're we report , we 're wrong twenty percent of the time or ten percent of the time . grad e: , i ' m thinking are you talking about for a paper , or are talking about for the corpus . phd d: actually that 's a good question because we 'd have to completely redo those meetings , and we have like ten of them now . grad e: we would n't have to re - do them , we would just have to edit them . postdoc a: that when brian comes , this 'll be an interesting aspect to ask him as b grad e: , brian . you s you said ryan . and it 's like , " who 's ryan ? " phd d: , no , that 's a good point , though , because for feature extraction like for prosody , the meetings we have now , it 's a good chunk of data we need to get a decent f postdoc a: and that 's what , ever since the february meeting that i transcribed from last year , forced alignment has been on the table as a way of cleaning them up later . postdoc a: and and so i ' m hopeful that 's possible . i know that there 's complication in the overlap sections and with the lapel mikes , phd d: , we might be able , at the very worst , we can get transcribers to correct the cases where , you have a good estimate where these places are because the recognition 's so poor . right ? phd d: so we need some way to push these first chunk of meetings into a state where we get good alignments . 
phd f: i ' m probably going to spend another day or so trying to improve things by , by using , acoustic adaptation . , the right now i ' m using the unadapted models for the forced alignments , and it 's possible that you get considerably better results if you , manage to adapt the , phone models to the speaker and the reject model to the to all the other speech . , so phd b: could you could you at the same time adapt the reject model to the speech from all the other channels ? phd b: , not just the speech from that of the other people from that channel , but the speech from the a actual other channels . phd d: but what you do wanna do is take the , even if it 's klugey , take the segments the synchronous segments , the ones from the hlt paper , where only that speaker was talking . phd d: use those for adaptation , cuz if you use everything , then you get all the cross - talk in the adaptation , and it 's just blurred . phd d: and that we know , we have that . and it 's about roughly two - thirds , very roughly averaged . that 's not completely negligible . like a third of it is bad for adaptation or so . professor c: i we 're not turning in to eurospeech , a redo of the hlt paper . that i do n't wanna do that , phd d: morgan 's talk went very it woke , it was really a presented and got people laughing grad e: especially the batteried meter popping up , that was hilarious . right when you were talking about that . grad e: he he was onto the bullet points about talking about the little hand - held , and trying to get lower power and so on , grad e: and microsoft pops up a little window saying " your batteries are now fully charged . " grad e: i ' m thinking about scripting that for my talk , put a little script in there to say " your batteries are low " right when i ' m saying that . professor c: no , i in your case , you were joking about it , but , your case the fact that your talking about similar things at a couple of conferences , it 's not these are conferences that have d really different emphases . whereas hlt and eurospeech , pretty similar , so i ca n't see really just putting in the same thing , phd d: the hlt paper is really more of a introduction - to - the - project paper , and , professor c: or some or some , i would see eurospeech if we have some eurospeech papers , these will be paper p , submissions . these will be things that are particular things , aspects of it that we 're looking at , rather than , attempt at a global paper about it . postdoc a: i did go through one of these meetings . i had , one of the transcribers go through and tighten up the bins on one of the , nsa meetings , and then i went through afterwards and double - checked it so that one is really very accurate . i men i mentioned the link . i sent that one ? postdoc a: , i ' m trying to remember i do n't remember the number off hand . postdoc a: it 's one of the nsa 's . i sent email before the conference , before last week . bef - what is wednesday , thursday . postdoc a: i ' m that one 's accurate , i ' ve been through it myself . grad e: , that 's what i was gon na say . the problem with those , they 're all german . phd d: and e and extremely hard to follow , like word - wise , i bet the transcri , i have no idea what they 're talking about , postdoc a: i corrected it for a number of the words . i ' m that , they 're accurate now . phd d: , this is tough for a language model probably but that might be useful just for speech . 
grad e: , before you l go i it 's alright for you to talk a little without the mike i noticed you adjusting the mike a lot , did it not fit you ? phd b: actually if you have a larger head , that mike 's got ta go farther away which means the balance is gon na make it wanna tip down . grad e: cuz , i ' m just thinking , we were we 're we ' ve been talking about changing the mikes , for a while , and if these are n't acoustically they seem really good , but if they 're not comfortable , we have the same problems we have with these stupid things . postdoc a: it 's com this is the first time i ' ve worn this , i find it very comfortable . grad e: i find it very comfortable too , but , it looked like andreas was having problems , and morgan was saying it grad e: you did wear it this morning ? ok , it 's off , so you can put it on . phd b: i , i do n't want it on , i just want to , say what is a problem with this . if you are wearing this over your ears and you ' ve got it all the way out here , then the balance is gon na want to pull it this way . where as if somebody with a smaller head has it back here , grad e: wh what it 's supposed to do is the backstrap is supposed to be under your crown , and so that should be if it 's right against your head there , which is what it 's supposed to be , that balances it so it does n't slide up . grad e: , right below if you feel the back of your head , you feel a little lump , and so it 's supposed to be right under that . postdoc a: , wonder if it 's if he was wearing it over his hair instead of under his hair . grad e: probably it was it probably just was n't tight enough to the back of his head . , so the directions do talk about bending it to your size , which is not really what we want . phd b: the other thing that would do it would be to hang a five pound weight off the back . grad e: we at boeing i used i was doing augmented reality so they had head - mounts on , and we had a little jury - rigged one with a welder 's helmet , grad e: and we had just a bag with a bunch of marbles in it as a counter - balance . professor c: or maybe this could be helpful just for evening the conversation between people . if people those who talk a lot have to wear heavier weights , and , so , what was i gon na say ? , i was gon na say , i had these , conversations with nist folks also while i was there and , , so they have their plan for a room , with , mikes in the middle of the table , and , close - mounted mikes , and they 're talking about close - mounted and lapels , just cuz professor c: and , like multiple video cameras coverin covering every everybody every place in the room , professor c: , the mikes in the middle , the head - mounted mikes , the lapel mikes , the array , with , there 's some discussion of fifty - nine , professor c: they might go down to fifty - seven because , there is , some pressure from a couple people at the meeting for them to use a kemar head . i forget what kemar , stands for , but what it is it 's dummy head that is very specially designed , and , so what they 're actually doing is they 're really there 's really two recording systems . professor c: so they may not be precisely synchronous , but the but there 's two recording systems , one with , twenty - four channels , and one with sixty - four channels . and the sixty - four channel one is for the array , but they ' ve got some empty channels there , and anyway they like they 're saying they may give up a couple if for the kemar head if they go with that . 
grad e: , h , j jonathan fiscus did say that , they have lots of software for doing calibration for skew and offset between channels grad e: their their legal issues wo n't allow them to do otherwise . but it sounded like they were pretty thought out grad e: and they 're gon na be real meetings , it 's just that they 're with str with people who would not be meeting otherwise . grad e: it 's just informal . , i also sat and chatted with several of the nist folks . they seemed like a good group . professor c: , we sh we should just have you read it , but , i mea ba i , we ' ve all got these little proceedings , professor c: but , , it was about , , going to a new task where you have insufficient data and using data from something else , and adapting , and how that works . , so it was pretty related to what liz and andreas did , except that this was not with meeting , it was with , like they s did n't they start off with broadcast news system ? and then they went to grad e: the - their broadcast news was their acoustic models and then all the other tasks were much simpler . so they were command and control and that thing . grad e: that it not only works , in some cases it was better , which was pretty interesting , but that 's cuz they did n't control for parameters . phd b: did they ever try going the other direction from simpler task to more complicated tasks , grad e: , one of the big problems with that is often the simpler task is n't fully does n't have all the phones in it , and that makes it very hard . but i ' ve done the same thing . i ' ve been using broadcast news nets for digits , like for the spr speech proxy thing that i did ? that 's what i did . so . it works . professor c: , we should probably what would actually what we should do , i have n't said anything about this , but probably the five of us should pick out a paper or two that , , got our interest , and we should go around the room at one of the tuesday lunch meetings and say , what was good about the conference , phd d: , the summarization was interesting , i anything about that field , but for this proposal on meeting summarization , , it 's a far cry because they were n't working with meeting type data , but he got an overview on some of the different approaches , phd d: but , there 's that 's a huge field and probably the groups there may not be representative of the field , i exactly that everyone submits to this particular conference , phd d: yet there was , let 's see , this was on the last day , mitre , bbn , and , prager phd d: this was wednesday morning . the sentence ordering one , was that barselou , and these guys ? phd d: anyway , i it 's in the program , i should have read it to remind myself , but that 's useful and like when mari and katrin and jeff are here it 'd be good to figure out some kinds of things that we can start doing maybe just on the transcripts cuz we already have postdoc a: , i like the idea that adam had of , z maybe generating minutes based on some of these things that we have because it would be easy to do that just , it has to be , though , someone from this group because of the technical nature of the thing . grad e: someone who actually does take notes , i ' m very bad at note - taking . 
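a toy version of the inter - channel calibration jonathan fiscus mentions at the top of this stretch : estimate the constant offset , in samples , between two recordings of the same event by searching for the lag that maximizes their length - normalized cross - correlation . this is only an illustration of the idea , not nist 's actual software :

```python
# Find the lag k that maximizes the correlation of ref[t] with
# other[t + k]; a positive result means `other` lags `ref`.
import numpy as np

def estimate_offset(ref, other, max_lag):
    best = None
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = ref[:len(ref) - k], other[k:]
        else:
            a, b = ref[-k:], other[:len(other) + k]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n])) / max(n, 1)  # normalize by overlap
        if best is None or score > best[0]:
            best = (score, k)
    return best[1]
```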
phd d: but what 's interesting is there 's all these different evaluations , like just , how do you evaluate whether the summary is good or not , phd d: and that 's what 's was interesting to me is that there 's different ways to do it , grad e: and as i said , i like the microsoft talk on scaling issues in , word sense disambiguation , that was interesting . grad e: it it was the only one it was the only one that had any real disagreement about . professor c: , i did n't have as much disagreement as i would have liked , but i did n't wanna i wouldn i did n't wanna get into it because , , it was the application was one i did n't know anything about , it just would have been , me getting up to be argumentative , but , , the missing thi so what they were saying it 's one of these things is , all you need is more data , but i mea i wh it @ that 's dissing it , improperly , it was a study . , they were doing this it was n't word - sense disambiguation , it was grad e: but it was a very simple case of " to " versus " too " versus " two " and " there " , " their " , " they 're " phd d: and there and their and that you could do better with more data , that 's clearly statistically professor c: and so , what they did was they had these different kinds of learning machines , and they had different amounts of data , and so they did like , eight different methods that everybody , , argues about , " my learning machine is better than your learning machine . " and , they were started off with a million words that they used , which was evidently a number that a lot of people doing that particular task had been using . so they went up , being microsoft , they went up to a billion . and then they had this log scale showing a , and naturally everything gets professor c: they , it 's a big company , i did n't mean it as a ne anything negative , but i grad e: , the reason they can do that , is that they assumed that text that they get off the web , like from wall street journal , is correct , and edit it . so that 's what they used as training data . it 's just saying if it 's in this corpus it 's correct . professor c: but , yes . there was the effect that , one would expect that you got better and better performance with more and more data . , but the real point was that the different learning machines are all over the place , and by going up significantly in data you can have much bigger effect then by switching learning machines and furthermore which learning machine was on top depended on where you were in this picture , so , professor c: could be . so so , that was , it 's a good point , but the problem i had with it was that the implications out of this was that , the choices you make about learning machines were therefore irrelevant which is not at n t as for as i know in tasks i ' m more familiar with @ is not true . what i what is true is that different learning machines have different properties , and you wanna those properties are . and someone else implied that we s , a all the study of learning machine we still what those properties are . we them perfectly , but we know that some kinds use more memory and some other kinds use more computation and some are hav have limited discrimination , but are just easy to use , and others are phd b: but does n't their conclusion just you could have guessed that before they even started ? 
because if you assume that these learning things get better and better , phd b: then as you approach there 's a point where you ca n't get any better , you get everything right . grad e: no , but there was still a spread . they were n't all up they were n't converging . phd b: but what i ' m saying is that th they have to , as they all get better , they have to get closer together . grad e: they were all still spread . but they right , right . but they had n't even come close to that point . all the tasks were still improving when they hit a billion . phd b: but they 're all going the same way , so you have to get closer . professor c: that 's getting cl , the spread was still pretty wide that 's th that 's true , but , it would be irntu intu intuition that this would be the case , but , to really see it and to have the intuition is quite different , somebody w let 's see who was talking about earlier that the effect of having a lot more data is quite different in switchboard than it is in broadcast news , phd d: so it depends a lot on whether , it disambiguation is exactly the case where more data is better , you 're you can assume similar distributions , but if you wanted to do disambiguation on a different type of , test data then your training data , then that extra data would n't generalize , grad e: but , one of their p they they had a couple points . w , one of them was that " , maybe simpler algorithms and more data are is better " . less memory , faster operation , simpler . because their simplest , most brain - dead algorithm did pretty darn when you got gave it a lot more data . and then also they were saying , " , m you have access to a lot more data . why are you sticking with a million words ? " , their point was that this million - word corpus that everyone uses is ten or fifteen years old . and everyone is still using it , professor c: but anyway , i it 's just the i it 's not really the conclusion they came to so much , as the conclusion that some of the , commenters in the crowd came up with professor c: that , , this therefore is further evidence that , more data is really all you should care about , and that was just going too far the other way , professor c: and the , one person ga g got up and made a brief defense , but it was a different grounds , it was that , i w the reason people were not using so much data before was not because they were stupid or did n't realize data was important , but th they did n't have it available . , but the other point to make a again is that , machine learning still does matter , but it matters more in some situations than in others , and it and also there 's not just mattering or not mattering , but there 's mattering in different ways . , you might be in some situation where you care how much memory you 're using , or you care , what recall time is , or you care , and phd d: or done another language , or , you so there 's papers on portability and rapid prototyping and blah - blah , and then there 's people saying , " , just add more data . " professor c: so these , th the in the speech side , the thing that @ always occurs to me is that if you one person has a system that requires ten thousand hours to train on , and the other only requires a hundred , and they both do about the same because the hundred hour one was smarter , that 's gon na be better . because people , there is n't gon na be just one system that people train on and then that 's it for the r for all of time . 
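a scaled - down sketch of the microsoft - style experiment being argued about here : pick a confusion set like to / too / two , use the surrounding words as features , and trace accuracy curves for two very different learners as the training set grows on a log scale . corpus loading is left abstract — the point in the talk was that the curves keep climbing with more data and the ranking of learners moves around :

```python
# Confusion-set disambiguation ("to" / "too" / "two") with growing data.
# Assumes token lists of lowercased words from some large edited corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

CONFUSION = {"to", "too", "two"}

def context_pairs(tokens, width=2):
    """(context-string, target-word) for each confusion-set occurrence."""
    for i, tok in enumerate(tokens):
        if tok in CONFUSION:
            ctx = tokens[max(0, i - width):i] + tokens[i + 1:i + 1 + width]
            yield " ".join(ctx), tok

def learning_curves(train_pairs, test_pairs, sizes):
    X_test_raw, y_test = zip(*test_pairs)
    curves = {}
    for name, model in [("naive_bayes", MultinomialNB()),
                        ("logistic", LogisticRegression(max_iter=1000))]:
        curves[name] = []
        for n in sizes:                    # e.g. 10**4, 10**5, 10**6, ...
            X_raw, y = zip(*train_pairs[:n])
            vec = CountVectorizer().fit(X_raw)
            model.fit(vec.transform(X_raw), y)
            pred = model.predict(vec.transform(X_test_raw))
            curves[name].append(accuracy_score(y_test, pred))
    return curves                          # plot these on a log-x scale
```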
, people are gon na be doing other different things , and so it these things matters matter . postdoc a: so , this was a very provocative slide . she put this up , and it was like this is this p people kept saying , " can i see that slide again ? " and then they 'd make a comment , and one person said , - known person said , , " before you dismiss forty - five years including my work " phd d: but th , the same thing has happened in computational linguistics , you look at the acl papers coming out , and now there 's a turn back towards , ok we ' ve learned statistic , we 're getting what we expect out of some statistical methods , the there 's arguments on both sides , grad e: is that all of them are based on all the others , just , you ca n't say grad e: , so . and i ' m saying the same thing happened with speech recognition , for a long time people were hand - c coding linguistic rules and then they discovered machine - learning worked better . and now they 're throwing more and more data and worrying perhaps worrying less and less about , the exact details of the algorithms . grad e: shall we read some digits ? are we gon na do one at a time ? or should we read them all agai at once again . professor c: let 's do it all at once . we @ let 's try that again . grad e: so remember to read the transcript number so that , everyone knows that what it is . and ready ? three , two , one . ###summary: the berkeley meeting recorder group discussed the preparation of a data sample for ibm , the manual adjustment of time bins by transcribers , recognition results for a test set of digits data , and forced alignments. participants also talked about eurospeech 2001 submissions , and exchanged comments on the proceedings of the recently attended human language technologies conference ( hlt'01 ). preliminary recognition results were presented for a subset of digits data. efforts to deal with cross-talk and improve forced alignments for non-digits data were also discussed. subsequent manual adjustment of speech and non-speech boundaries will be delegated to the transcriber pool. a subset of meeting recorder data will be prepared ( i.e . pre-segmented and manually adjusted ) for delivery to ibm. the transcriber interface may require modifications if it becomes necessary for transcribers to quickly switch among waveform displays. transcribers risk overlooking speech that is deeply embedded in the mixed signal. should transcriptions be derived from each of the close-talking channels or from the mixed signal alone? the pre-segmentation tool does not perform well on short utterances , e.g . backchannels. the transcriber interface does not allow the user to quickly switch among visual displays , i.e . multi-channel waveforms. forced alignments were problematic for non-digits data due to cross-talk. this problem was reported to be particularly bad for cross-talk featuring more than one word. echo cancellation was considered as a means of improving forced alignments , but was ultimately deemed to be too time-consuming given the dynamic aspect of adapting distances between speakers. comparing error rates in terms of the recording device used , i.e . lapel versus wireless microphones , is tedious. deleting segments of the recordings is expected to be very time-consuming for transcribers. more results are needed for generating adequate submissions for eurospeech'01. participants have complained that the head-mounted microphone is uncomfortable. 
one meeting recording has been channelized and pre-segmented for delivery to ibm. a sample of digits data is being prepared for ibm. preliminary recognition results were obtained for a subset of digits data. the error rate distribution was multimodal , reflecting differences in performance for native versus non-native speakers , and also possible pre-processing errors. future efforts will involve an attempt to get good forced alignments on digits data and generate a report for eurospeech'01. a program has been developed for replacing sections of recorded speech with editing bleeps. the tightening of time bins for one nsa meeting was checked and judged to be highly accurate. efforts are ongoing to improve forced alignments for a subset of non-digits data , including acoustic adaptation manipulations.
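a minimal sketch of the editing - bleep program mentioned in the summary just above : overwrite a span of a mono recording with a pure tone so the words are unrecoverable but the timing is preserved . the use of the soundfile library and the tone parameters are assumptions , not a description of the actual tool :

```python
# Replace [start_s, end_s) of a mono recording with a quiet 1 kHz tone.
import numpy as np
import soundfile as sf

def bleep(in_path, out_path, start_s, end_s, freq=1000.0, level=0.1):
    audio, sr = sf.read(in_path)            # expects a mono file
    i0, i1 = int(start_s * sr), int(end_s * sr)
    t = np.arange(i1 - i0) / sr
    audio[i0:i1] = level * np.sin(2 * np.pi * freq * t)
    sf.write(out_path, audio, sr)
```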
professor b: ok so today we 're looking at a number of things we 're trying and fortunately for listeners to this we lost some of it 's visual but got tables in front of us . what is what does combo mean ? phd c: so combo is a system where we have these features that go through a network and then this same string of features but low - pass filtered with the low - pass filter used in the msg features . and so these low - pass filtered goes through m another mlp and then the linear output of these two mlp 's are combined just by adding the values and then there is this klt . the output is used as features as . professor b: so let me try to restate this and see if i have it right . there is there is the features there 's the ogi features and then those features go through a contextual l let 's take this bottom arr one pointed to by the bottom arrow . those features go through a contextualized klt . then these features also get low - pass filtered phd c: the first is a klt using several frames of the features . the second path is mlp also using nine frames several frames of features phd c: adding the outputs just like in the second propose the proposal from for the first evaluation . professor b: and so and then the one at the top and i presume these things that are in yellow because overall they 're the best ? professor b: let 's focus on them then so what 's the block diagram for the one above it ? phd c: for the f first yellow line you mean ? so it 's s the same except that we do n't have this low - pass filtering so we have only two streams . professor b: do you e they mentioned made some when i was on the phone with sunil they mentioned some weighting scheme that was used to evaluate all of these numbers . phd c: actually the way things seems to it 's forty percent for ti - digit , sixty for all the speechdat - cars , all these languages . ehm the match is forty , medium thirty five and high mismatch twenty - five . phd c: but . generally what you observe with ti - digits is that the result are very close whatever the system . professor b: and so have you put all these numbers together into a single number representing that ? not professor b: and how does this compare to the numbers so ogi two is just the top row ? phd c: so to actually ogi two is the baseline with the ogi features but this is not exactly the result that they have because they ' ve they 're still made some changes in the features and but actually our results are better than their results . i by how much because they did not send us the new results professor b: ok so the one place where it looks like we 're messing things up a bit is in the highly mismatched italian . an phd c: there is something funny happening here because but there are thirty - six and then sometimes we are around forty - two and professor b: so one of the ideas that you had mentioned last time was having a second silence detection . phd c: filt , so it seems f for the match and mismatched condition it 's it brings something . but actually there are there 's no room left for any silence detector at the server side because of the delay . phd c: except i because they are still working . t two days ago they were still working on this trying to reduce the delay of the silence detector so but if we had time perhaps we could try to find some compromise between the delay that 's on the handset and on the server side . 
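a rough sketch of the combo stream combination phd c describes above : two mlps see the same features ( one stream after an msg - style low - pass filter ) , their linear outputs are summed , and a klt — i.e . pca — of the sum gives the final features . the mlps and the filter here are stand - ins passed in as callables , not the real aurora nets :

```python
# Two MLP streams over the same features, summed linear outputs, then a
# KLT (PCA) down to the final feature dimension.
import numpy as np

def klt(X, n_dims):
    """Karhunen-Loeve transform of the rows of X, keeping n_dims."""
    X0 = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X0, rowvar=False))
    order = np.argsort(vals)[::-1][:n_dims]
    return X0 @ vecs[:, order]

def combo_features(feats, lowpass, mlp_a, mlp_b, n_dims=28):
    """feats: (frames, dim); mlp_a/mlp_b return linear (pre-softmax) outputs."""
    combined = mlp_a(feats) + mlp_b(lowpass(feats))   # add, then decorrelate
    return klt(combined, n_dims)
```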
perhaps try to reduce the delay on the handset and but for the moment they have this large delay on the feature computation and so we do n't professor b: alright so for now at least that 's not there you have some results with low - pass filter cepstrum does n't have a huge effect but it looks like it maybe could help in a couple places . professor b: little bit . and let 's see what else did we have in there ? i it makes a l at this point this is i should probably look at these others a little bit and you yellowed these out but i see that one you ca n't use because of the delay . those look pretty good . let 's see that one even the just the second row does n't look that bad right ? that 's just and and that looks like an interesting one too . phd c: actually the second line is like the first line in yellow except that we do n't have this klt on the first on the left part of the diagram . we just have the features as they are . professor b: so when we do this weighted measure we should compare the two cuz it might even come out better . and it 's a little slightly simpler . professor b: so so there 's so i would put that one also as a maybe . and it 's actually does significantly better on the highly mismatched italian , so s and little worse on the mis on the mm case , it 's worse than a few things so let 's see how that c see how that comes out on their measure and are we running this for ti - digits or now is ti di is that part of the result that they get for the development th the results that they 're supposed to get at the end of the month , the ti - digits are there also ? professor b: and see what else there is here . i see the one i was looking down here at the o the row below the lower yellowed one . that 's with the reduced klt size reduced dimensionality . professor b: what happens there is it 's around the same and so you could reduce the dimension as you were saying before a bit perhaps . professor b: but it is little . not by a huge amount , i . what are what are the sizes of any of these sets , i ' m you told me before , but i ' ve forgotten . so how many words are in one of these test sets ? phd c: it 's it depends the matched is generally larger than the other sets and it 's around two thousand or three thousand words perhaps , at least . professor b: so the so the sets so the test sets are between five hundred and two thousand sentences , let 's say and each sentence on the average has four or five digits or is it most of them longer or phd d: but sometime the sentence have only one digit and sometime like the number of credit cards , something like that . professor b: right , so between one and sixteen . see the reason i ' m asking is we have all these small differences and i how to take them , right ? so i if you had just to give an example , if you had a thousand words then a tenth of a percent would just be one word , right ? so so it would n't mean anything . so it be i 'd like to the sizes of these test sets were actually . professor b: right . so these are word error rates so this is on how many words . phd d: we have the result that the output of the htk the number of sentences , no it 's the number is n't . professor b: so anyway if you could just mail out what those numbers are and then that be great . what else is there here ? see the second from the bottom it says sil , but this is some different silence or thing or what was that ? phd d: it 's only one small experiment to happened . 
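one plausible reading , as code , of the weighting scheme the group keeps coming back to — forty percent ti - digits , sixty percent speechdat - car , with forty / thirty - five / twenty - five over the matched , medium and high - mismatch conditions . the discussion above leaves open whether it applies to percentages or raw error counts ; this version assumes word - error - rate percentages , and the per - language averaging is glossed over :

```python
# 40% TI-digits, 60% SpeechDat-Car; within the car data, 40/35/25 for
# well-matched / medium / high mismatch.  Applied to WER percentages.
def weighted_wer(ti_digits, car):
    """car: dict of word error rates for 'wm', 'mm', 'hm' conditions."""
    car_avg = 0.40 * car["wm"] + 0.35 * car["mm"] + 0.25 * car["hm"]
    return 0.40 * ti_digits + 0.60 * car_avg

# e.g. weighted_wer(2.1, {"wm": 6.3, "mm": 14.8, "hm": 36.0})  -> ~10.9
```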
to apply also to in include also the silence of the mlp we have the fifty - six form and the silence to pick up the silence and we include those . professor b: yes . - , - . the silence plus the klt output ? so you 're only using the silence . phd c: it is because we just keep we do n't keep all the dimensions after the klt phd c: so we try to add the silence also in addition to the these twenty - eight dimensions . professor b: i see . and what and what 's ogi forty - five ? the bottom one there ? phd c: it 's o it 's ogi two , it 's so the th it 's the features from the first line professor b: that was the one that was the second row . so what 's the difference between the second professor b: ok , so alright so it looks to me i the same given that we have to take the filt ones out of the running because of this delay problem so it looks to me like the ones you said i agree are the ones to look at but would add the second row one and then if we can also when they 're using this weighting scheme of forty , thirty - five , twenty - five is that on the percentages or on the raw errors ? i it 's probably on the percentages right ? professor b: maybe maybe they 'll argue about it . so if we can how many words are in each and then dave promised to get us something tomorrow which will be there as far as they ' ve gotten friday and then we 'll operate with that and how long did it i if we 're not doing all these things if we 're only doing i since this is development data it 's legitimate to do more than one , ordinarily if in final test data you do n't want to do several and take the best that 's not proper but if this is development data we could still look at a couple . phd c: we can but we have to decide we have to fix the system on this d on this data , to choose the best professor b: right so maybe what we do is we as soon as we get the data from them we start the training and but we start the write - up right away because as you say there 's only minor differences between these . professor b: , and i would , i would i 'd like to see it maybe i can edit it a bit the my what in this si i in this situation is my forte which is english . so h . have y have you seen alt d do they have a format for how they want the system descriptions or anything ? professor b: yes , for those who are listening to this and not looking at it 's not really that impressive , it 's just tiny . it 's all these little categories set a , set b , set c , multi - condition , clean . no mitigation . do what no mitigation means here ? professor b: this is i right , it says right above here channel error resilience , so recognition performance is just the top part , actually . and they have yes , split between seen databases and non - seen so between development and evaluation . and so right , it 's presumed there 's all sorts of tuning that 's gone on the see what they call seen databases and there wo n't be tuning for the unseen . multi - condition multi - condition . so they have looks like they have so they splitting up between the ti - digits and everything else , i see . so the everything else is the speechdat - car , that 's the multi multilingual professor b: it is , but there 's also there 's these tables over here for the ti - digits and these tables over here for the car data which is i all the multilingual and then there 's they also split up between multi - condition and clean only . phd c: for ti - digits . , actually . 
for the ti - digits they want to train on clean and on noisy phd c: but we actually do we have the features ? for the clean ti - digits but we did not test it yet . the clean training . professor b: anyway , sounds like there 'll be a lot to do just to work with our partners to fill out the tables over the next few days professor b: i they have to send it out let 's see the thirty - first is wednesday and the it has to be there by some hour european time on wednesday phd d: we lost time wednesday maybe because that the difference in the time may be is a long different of the time . phd d: maybe the thursday the twelfth of the night of the thurs - thirty - one is not valid in europe . we is happening . phd c: , . except if it 's the thirty - one at midnight or i we can still do some work on wednesday morning . professor b: w i is but is it midni it was actually something like five pm on was like it was five pm , i did n't think it was midnight . they said they wanted everything by phd c: no , we are wondering about the hour that we have to i if it 's three pm it 's professor b: yes , yes , but i did n't think it was midnight that it was due , it was due at some hour during the day like five pm . professor b: so i we should look but my assumption is that we have to be done tuesday . so then next thursday we can have a little aftermath but then we 'll actually have the new data which is the german and the danish but that really will be much less work because the system will be fixed so all we 'll do is take whatever they have and run it through the process . we wo n't be changing the training on anything so there 'll be no new training , there 'll just be new htk runs , so that 's means in some sense we can relax from this after tuesday and maybe next meeting we can start talking a little bit about where we want to go from here in terms of the research . what things did you think of when you were doing this process that you just did n't really have time to adequately work on what ? grad a: , stephane always has these great ideas and , but we do n't have time . professor b: but they 're ideas . , that was good . and and also it 's still true that it 's true that we at least got fairly consistent i improved results by running the neural net transformation in parallel with the features professor b: rather than in sequence which was your suggestion and that seems to have been borne out . the fact that none of these are , enormous is not too surprising most improvements are n't enormous some of them are but you have something really wrong and you fix it you can get big and really enormous improvements but cuz our best improvements over the years that we ' ve gotten from finding bugs , anyway i see where we are and everybody knows what they 're doing and is there anything else we should talk about or are we done ? phd c: it 's ok . we so we will we 'll try to focus on these three architectures and perhaps i was thinking also a fourth one with just a single klt because we did not really test that removing all these klt 's and putting one single klt at the end . professor b: , that would be pretty low maintenance to try it . if you can fit it in . 
i have i do have one other piece of information which i should tell people outside of this group too i if we 're gon na need it but jeff up at the university of washington has gotten a hold of a some server farm of ten multiprocessor ibm machines rs six thousands and so each one is four processors or i , eight hundred megahertz and there 's four processors in a box and there 's ten boxes and there 's some ti so if he 's got a lot of processing power we 'd have to schedule it but if we have some big jobs and we wanna run them he 's offering it . so . it 's when he was here he used i not only every machine here but every machine on campus as far as i could tell , so in some ways he just got his payback , but again i if we 'll end up with if we 're gon na be cpu limited on anything that we 're doing in this group but if we are that 's an offer . you guys doing great so that 's really neat we 'll g do n't think we need to the other thing i that i will say is that the digits that we 're gon na record momentarily is starting to get are starting to get into a pretty good size collection and in addition to the speechdat we will have those to work with really pretty soon now so that 's another source of data . which is s under somewhat better control and that we can make measurements of the room the that if we feel there 's other measurements we do n't have that we 'd like to have we can make them dave and i were just talking about that a little while ago so that 's another possibility for this work . k , if nobody has anything else maybe we should go around do our digits duty . ok i 'll start . , let me say that again . ok . i we 're done .
the icsi meeting recorder group at berkeley are approaching an important milestone on their project. they discussed most recent results , finalized plans to continue and discussed the work required and timing needed for completion of this stage of the project. mn007 and fn002 need to find a way of combining result figures into one easily comparable way of judging performance. they were also asked to mail round numbers detailing the size of their test-sets , so that group members can assess the seriousness of figures such as word error rates. there are now just 4 architectures that will be carried forward for testing. though the final system is not due to be fixed , until tuesday , writing it up must begin sooner. anything written must go through me013 for editing. discussion on future work once this stage is out of the way will be held at the next meeting. while using a second silence detection system it was found that while providing some improvement , it added too great a delay on the server side of the system. the experiment team have been narrowing down their experiments , coming close to fixing their system. partner ogi have been using a weighting scheme , but the icsi group do not yet have all the parts to run a similar system. they have been able to come close to the ogi results , but since ogi have changed something they do differ , though for the better. so far , testing on italian is the worst performing condition. with regards to the digits the group have been recording during meetings , the collection of data is growing , and will soon be a workable size.
phd d: , i will try to explain the thing that i did this week during this week . that i work i begin to work with a new feature to detect voice - unvoice . phd d: what i trying two mlp to the with this new feature and the fifteen feature from the bus base system phd d: and i ' m trying two mlp , one that only have t three output , voice , unvoice , and silence , phd d: and other one that have fifty - six output . the probabilities of the allophone . and i tried to do some experiment of recognition with that and only have result with the mlp with the three output . and i put together the fifteen features and the three mlp output . and , the result are li a little bit better , but more or less similar . professor c: , i ' m slightly confused . what what feeds the three - output net ? phd d: the feature the input ? the inputs are the fifteen bases feature . the with the new code . and the other three features are r , the variance of the difference between the two spectrum , the variance of the auto - correlation function , except the first point , because half the height value is r - zero and also r - zero , the first coefficient of the auto - correlation function . that is like the energy with these three feature , professor c: you would n't do like r - one over r - zero like that ? usually for voiced - unvoiced you 'd do , you 'd do something you 'd do energy but then you have something like spectral slope , which is you get like r - one ov over r - zero like that . phd d: no , r c no . auto - correlation ? yes , yes , the variance of the auto - correlation function that uses that professor c: ye - that 's the variance , but if you just say " what is " , to first order , one of the differences between voiced , unvoiced and silence is energy . another one is but the other one is the spectral shape . professor c: , and so r - one over r - zero is what you typically use for that . professor c: see , because it because this is just like a single number to tell you " does the spectrum look like that or does it look like that " . professor c: right ? so if it 's if it 's low energy but the spectrum looks like that or like that , it 's probably silence . but if it 's low energy and the spectrum looks like that , it 's probably unvoiced . so if you just had to pick two features to determine voiced - unvoiced , you 'd pick something about the spectrum like r - one over r - zero , and r - zero professor c: or i you 'd have some other energy measure and like in the old days people did like zero crossing counts . phd d: , also th use this . bec - because the result are a little bit better but we have in a point that everything is more or less the similar more or less similar . professor c: right , but it seemed to me that what you were getting at before was that there is something about the difference between the original signal or the original fft and with the filter which is what and the variance was one take on it . professor c: but it could be something else . suppose you did n't have anything like that . then in that case , if you have two nets , alright , and this one has three outputs , and this one has f whatever , fifty - six , if you were to sum up the probabilities for the voiced and for the unvoiced and for the silence here , we ' ve found in the past you 'll do better at voiced - unvoiced - silence than you do with this one . so just having the three output thing does n't really buy you anything . the issue is what you feed it . 
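a sketch of the frame - level voicing measures that come up in this exchange : r - zero as energy , the r - one over r - zero ratio the professor suggests as a one - number stand - in for spectral shape or slope , and an old - fashioned zero - crossing count . the windowing and normalization choices are assumptions :

```python
# Per-frame voicing cues: R0 (energy), R1/R0 (near +1 for voiced,
# low or negative for unvoiced), and a zero-crossing count.
import numpy as np

def voicing_measures(frame):
    """frame: 1-D array of samples for one analysis window."""
    r0 = float(np.dot(frame, frame))           # autocorrelation at lag 0
    r1 = float(np.dot(frame[:-1], frame[1:]))  # autocorrelation at lag 1
    slope = r1 / r0 if r0 > 0 else 0.0
    zcr = int(np.sum(np.signbit(frame[:-1]) != np.signbit(frame[1:])))
    return {"r0": r0, "r1_over_r0": slope, "zero_crossings": zcr}
```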
phd e: so you 're saying take the features that go into the voiced - unvoiced - silence net and feed those into the other one , as additional inputs , rather than having a separate professor c: w w that 's another way . that was n't what i was saying but that 's certainly another thing to do . no i was just trying to say if you b if you bring this into the picture over this , what more does it buy you ? professor c: and what i was saying is that the only thing that it buys you is based on whether you feed it something different . and something different in some fundamental way . and so the thing that she was talking about before , was looking at something ab something about the difference between the log fft log power and the log magnitude f - spectrum and the filter bank . and so the filter bank is chosen to integrate out the effects of pitch and she 's saying trying so the particular measure that she chose was the variance of this m of this difference , but that might not be the right number . professor c: maybe there 's something about the variance that 's not enough or maybe there 's something else that one could use , but that , for me , the thing that struck me was that you wanna get something back here , so here 's an idea . what about it you skip all the really clever things , and just fed the log magnitude spectrum into this ? professor c: this is f you have the log magnitude spectrum , and you were looking at that and the difference between the filter bank and c computing the variance . that 's a clever thing to do . what if you stopped being clever ? and you just took this thing in here because it 's a neural net and neural nets are wonderful and figure out what they can what they most need from things , and that 's what they 're good at . so you 're trying to be clever and say what 's the statistic that should we should get about this difference but , maybe just feeding this in or feeding both of them in , another way , saying let it figure out what 's the what is the interaction , especially if you do this over multiple frames ? then you have this over time , and both kinds of measures and you might get something better . professor c: that 's another thing you could do , it seems to me , if you have exactly the right thing then it 's better to do it without the net because otherwise you 're asking the net to learn this , say if you wanted to learn how to do multiplication . you could feed it a bunch of s you could feed two numbers that you wanted to multiply into a net and have a bunch of nonlinearities in the middle and train it to get the product of the output and it would work . but , it 's crazy , cuz we know how to multiply and you 'd be much lower error usually if you just multiplied it out . but suppose you do n't really the right thing is . and that 's what these dumb machine learning methods are good at . so . anyway . it 's just a thought . phd e: how long does it take , carmen , to train up one of these nets ? phd d: the accuracy . no for , yes f i do n't remember for voice - unvoice , phd d: this is for the other one . i should i ca n't show that . but that fifty - five was for the when the output are the fifty - six phone . phd d: that i look in the with the other nnn the other mlp that we have are more or less the same number . silence will be better but more or less the same . professor c: at the frame level for fifty - six that was the number we were getting for reduced band width . 
phd d: that for the other one , for the three output , is sixty - two , sixty three more or less . professor c: but even i in , in training . still , actually , so this is a test that you should do then . , if you 're getting fifty - six percent over here , that 's in noise also , right ? if you 're getting fifty - six here , try adding together the probabilities of all of the voiced phones here and all of the unvoiced phones professor c: and see what you get then . i bet you get better than sixty - three . phd d: i , but i th that we i have the result more or less . maybe . i do n't i ' m not but i remember @ that i ca n't show that . professor c: ok , but that 's a that is a good check point , you should do that anyway , ok ? given this regular old net that 's just for choosing for other purposes , add up the probabilities of the different subclasses and see how you do . and that anything that you do over here should be at least as good as that . phd d: noisy timit . we have noisy timit with the noise of the ti - digits . and now we have another noisy timit also with the noise of italian database . professor c: i see . there 's gon na be it looks like there 's gon na be a noisy some large vocabulary noisy too . somebody 's preparing . professor c: i forget what it 'll be , resource management , wall street journal , something . some some read task actually , that they 're preparing . professor c: , so the , the issue is whether people make a decision now based on what they ' ve already seen , or they make it later . and one of the arguments for making it later is let 's make that whatever techniques that we 're using work for something more than connected digits . phd d: this is the work that i did during this date and also mmm i h hynek last week say that if i have time to begin to study the france telecom proposal to look at the code and something like that to know exactly what they are doing because maybe that we can have some ideas but not only to read the proposal . look insi look i carefully what they are doing with the program @ and i begin to work also in that . but the first thing that i do n't understand is that they are using r - the log energy that this quite i why they have some constant in the expression of the lower energy . i what that means . professor c: , at the front it says " log energy is equal to the rounded version of sixteen over the log of two " professor c: , this is natural log , and maybe it has something to do with the fact that this is i have no idea . professor c: , that 's what i was thinking , but , then there 's the sixty - four , i . phd d: because maybe they 're the threshold that they are using on the basis of this value phd d: i exactly , because th maybe they have a meaning . but i what is the meaning of take exactly this value . phd e: so they 're taking the number inside the log and raising it to sixteen over log base two . professor c: if we ignore the sixteen , the natural log of t one over the natural log of two times the natu , maybe somebody 'll think of something , but this is it may just be that they want to have for very small energies , they want to have some a phd d: , the e the effect i do n't @ understand the effect of this , no ? because it 's to do something like that . professor c: , it says , since you 're taking a natural log , it says that when you get down to essentially zero energy , this is gon na be the natural log of one , which is zero . 
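the check morgan proposes , written out : collapse the fifty - six phone posteriors from the existing net into voiced / unvoiced / silence scores by summation and measure frame accuracy against the three - way labels . the phone - to - class sets below are deliberately incomplete and would need to be filled in for the real phone set :

```python
# Sum the 56 phone posteriors into 3 class scores and score frame
# accuracy.  The voiced/unvoiced sets here are only a sketch.
import numpy as np

VOICED = {"aa", "ae", "ah", "iy", "uw", "m", "n", "l", "r"}    # ...etc.
UNVOICED = {"p", "t", "k", "f", "s", "sh", "ch", "hh"}         # ...etc.

def collapse(posteriors, phone_names):
    """posteriors: (frames, 56) -> (frames, 3) voiced/unvoiced/silence."""
    v = [i for i, p in enumerate(phone_names) if p in VOICED]
    u = [i for i, p in enumerate(phone_names) if p in UNVOICED]
    s = [i for i in range(len(phone_names)) if i not in v + u]
    return np.stack([posteriors[:, grp].sum(axis=1) for grp in (v, u, s)],
                    axis=1)

def frame_accuracy(posteriors, phone_names, labels):
    """labels: ints with 0=voiced, 1=unvoiced, 2=silence."""
    pred = collapse(posteriors, phone_names).argmax(axis=1)
    return float((pred == labels).mean())
```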
professor c: so it'll go down to — the lowest value for this would be zero, so you're restricted to being positive, and it smooths things for very small energies. why they chose sixty-four and not something else was probably just experimental. and the constant in front of it, i have no idea.
phd d: i will try moving this parameter in their code and see what happens — maybe all their thresholds are set on the basis of it.
professor c: they probably have some particular fixed-point arithmetic that they're using, and then it just —
phd e: i was just going to say, maybe it has something to do with hardware, something they were doing.
professor c: they're probably working with fixed point or integer — you're supposed to be able to run this on modest hardware anyway — and so maybe that puts it in the right realm somewhere.
professor c: given that you're doing things in floating point on the computer, i don't think it matters — that would be my guess.
professor c: he was gone these first few days, and then he's here for a couple of days before he goes to salt lake city.
professor c: so he's going to icassp, which is good. i don't know if many people are going to icassp, so — make sure somebody goes.
professor c: people are less consistent about going to icassp, and it's still a reasonable forum for students to present things. it's for engineering students of any kind: if you haven't been there much, it's good to go to, to get a feel for a range of things, not just speech. but for dyed-in-the-wool speech people, icslp and eurospeech are much more targeted. and then there are these other meetings, like hlt and asru, so there are actually plenty of meetings that are really relevant to computational speech processing of one sort or another. so — i mostly just ignored it because i was too busy and didn't get to it. want to talk a little bit about what we were talking about this morning? just briefly, or anything else?
grad a: so, some of the progress: i've been getting my committee members for the quals. so far i have morgan and hynek, and mike jordan, and i asked john ohala and he —. so i need to ask malek; one more. then i talked a little bit about continuing with these dynamic acoustic events, and we're thinking about a way to test the completeness of a set of dynamic events — completeness in the sense that, if we pick these x number of acoustic events, do they provide sufficient coverage for the phones we're trying to recognize, or the words we're going to try to recognize later on. so morgan and i were discussing a form of cheating experiment where we have a chosen set of features, or acoustic events, and we train up a hybrid system to do phone recognition on timit. the idea is that if we get good phone recognition results using this set of acoustic events, then that says these acoustic events are sufficient to cover a set of phones, at least as found in timit. so it would be a measure of "are we on the right track with the choices of our acoustic events". so that's going on. and also, i'm just working on my final project for jordan's class, which is —
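An aside on the france telecom energy expression puzzled over above: here is one concrete reading consistent with what's said — the 16/ln(2) constant in front, a 64 inside, and a floor of ln(1) = 0 at zero energy. The exact placement of the 64 is guesswork pieced together from the conversation, not the actual spec.

```python
import math

def log_energy(E):
    # Assumed shape only: 16/ln(2) scales the natural log; writing the
    # argument as 1 + E/64 makes E = 0 give ln(1) = 0, matching the
    # behavior described above, but the real formula may differ.
    return round((16.0 / math.log(2.0)) * math.log(1.0 + E / 64.0))

print(log_energy(0.0))   # 0: the measure is floored at zero
print(log_energy(1e6))   # large energies grow logarithmically
```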
professor c: let me back up while we're still on it. the other thing i was suggesting, though, is that given that you're talking about binary features, maybe the first thing to do is just to count co-occurrences and get probabilities for a discrete hmm. that'd be pretty simple: if you had, say, ten events you were counting, each frame would only have a thousand or so possible values for those ten bits, so you could make a table — with thirty-nine phone categories, it would be a thousand by thirty-nine — and just count the co-occurrences between the event code and the phone, divide by the number of occurrences of the phone, and that would give you the likelihood of the events given the phone. then use that in a very simple hmm and you could do phone recognition without any of the issues of training the net. it'd be on the simple side, but — the example i was giving was: if onset of voicing and end of voicing were your only two kinds of events, and you had those all marked correctly and counted co-occurrences, you'd get that distinction completely right, but you'd get all the other distinctions randomly wrong, and there'd be nothing to tell you otherwise. if you just do this by counting, you should be able to find out in a pretty straightforward way whether you have a sufficient set of events to do the level of classification of phones that you'd like. so that was the idea. and then the other thing we were discussing was: how do you get your training data? the switchboard transcription project was half a dozen people or so, working off and on over a couple of years, on a similar amount of data to what you're talking about with timit training. so it seems to me the only reasonable starting point is to automatically translate the current timit markings into the markings you want. it won't have the characteristic you'd like, of catching funny things that maybe aren't there in these automatic markings — it's
professor c: and a short amount of time, just to see if that information is sufficient to determine the phones.
phd e: then, to get an idea of how different it is, you could maybe take some subset, go through a few sentences, mark them by hand, and see how different they are from the canonical ones — just to get a rough idea of whether it really even makes a difference.
professor c: you can get a little feeling for it that way; that's probably right. my guess would be that since timit is read speech, this would be less of a big deal; if you went and looked at spontaneous speech, it'd be more of one.
professor c: and the other thing would be, say, if you had these ten events, you'd want to see what happens if you took two events, or four, or ten — and hopefully there's some point at which having more information doesn't tell you all that much more about what the phones are.
professor c: you could, but what he's talking about here is a translation to a per-frame feature vector, so there's no sequence in it; it's just a —
professor c: just that. the idea is: with a very simple statistical structure, could you at least verify that you've chosen features that are sufficient. ok — and you were starting to say something else about your class project?
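A sketch of the counting scheme just described: ten binary events give 1024 possible frame codes; count code/phone co-occurrences and normalize by phone counts to get the likelihood of a code given a phone, usable as a discrete-HMM emission table. The sizes are the ones stated above; the data is random stand-in.

```python
import numpy as np

N_EVENTS, N_PHONES = 10, 39
n_frames = 50_000

# Stand-in training data: per-frame binary event vectors and phone labels.
events = np.random.randint(0, 2, size=(n_frames, N_EVENTS))
phones = np.random.randint(0, N_PHONES, size=n_frames)

# Encode each frame's 10 bits as a single integer code in [0, 1024).
codes = events @ (1 << np.arange(N_EVENTS))

# Co-occurrence table counts[code, phone], divided by phone occupancy,
# gives P(event code | phone).
counts = np.zeros((1 << N_EVENTS, N_PHONES))
np.add.at(counts, (codes, phones), 1)
likelihood = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
```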
grad a: so for my class project i'm tinkering with support vector machines — something we learned in class, just another method for doing classification. i'm going to apply it to compare against the results of king and taylor, who did this with recurrent neural nets: they recognized a set of phonological features, learning a mapping from the mfccs to these phonological features. so i'm going to do a similar thing with support vector machines and see if —
grad a: right. support vector machines are good at dealing with small amounts of data; if you give them less data, they still do a reasonable job of learning the patterns.
phd e: is there some distance metric that they use? what do they do for classification?
grad a: the simple idea behind a support vector machine is: you have this feature space, and it finds the optimal separating plane between the two classes. what it actually does, at the end of the day, is pick those examples that are closest to the separating boundary, remember them, and use them to recreate the boundary for the test set. given these critical examples — which they call support vectors — and a new example, if the new example falls away from the boundary in one direction it's classified as part of this particular class, and otherwise it's the other class.
phd e: so why save the examples? why not just save what the boundary itself is?
professor c: that's another way of doing it, right? so it's —
professor c: it goes back to the nearest-neighbor thing. when is nearest-neighbor good? nearest-neighbor is good if you have lots and lots of examples, but if you have lots and lots of examples, it can take a while to use — there are lots of lookups. so a long time ago people talked about condensed nearest-neighbor, where you'd pick out some representative examples which would be sufficient to correctly classify everything that came in. support vectors go back to that idea.
phd e: i see. so rather than doing nearest-neighbor, where you compare to every single one, you just pick a few critical ones, and —
professor c: and the neural-net approach — or gaussian mixtures, for that matter — is a fairly brute-force kind of thing: you predefine that there's this big bunch of parameters and then place them as best you can to define the boundaries. these things do take a lot of parameters, and if you have only a modest amount of data, you have trouble learning them. so i guess the idea is that this is reputed to be somewhat better in that regard.
grad a: i guess it can be a reduced parameterization of the model, by keeping just certain selected examples.
professor c: but i don't know if people have done careful comparisons of this on large tasks or anything. maybe they have.
grad a: actually, you don't get a number between zero and one; you get either a zero or a one. you get a distance measure at the end of the day, and then that distance measure is translated to a zero or a one.
phd e: and you get that for each class — a zero or a one.
professor c: cuz actually the mississippi state people did use support vector machines for speech recognition, and they were using them to estimate probabilities.
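A toy rendering of the mechanics just described — fit a maximum-margin separating plane, keep only the boundary-critical examples (the support vectors), and classify new points by which side they fall on. scikit-learn's SVC stands in for whatever implementation the project actually uses, and the data is random.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))  # stand-in MFCC-like frames
# Stand-in binary label (e.g. "voiced"), loosely tied to one dimension.
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

clf = SVC(kernel="linear")
clf.fit(X[:400], y[:400])

# Only the examples nearest the boundary are retained to represent it.
print("support vectors kept:", len(clf.support_), "of", 400)
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```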
grad a: right — they had a way to translate the distances into probabilities with a simple sigmoidal function.
professor c: and did they use a sigmoid or a softmax-type thing? didn't they exponentiate —
professor c: and then divide by the sum of them, or —? so it is a sigmoidal. alright.
professor c: they were ok — i don't think they were earth-shattering, but this was a couple of years ago. i remember them presenting it at some meeting, and i don't think people were very critical, because it was interesting just to try it; it was the first time they'd tried it, so the numbers were not incredibly good, but it was reasonable. i don't remember anymore — i don't even remember what the task was. broadcast news, maybe.
grad b: so barry, if you just have zeros and ones, how are you doing the speech recognition?
grad a: i'm not planning on doing speech recognition with it; i'm just doing detection of phonological features.
grad a: so this feature set, called the sound patterns of english, is just a bunch of binary-valued features. say: is this voicing or not voicing, is this sonorant or not sonorant, and so on.
grad a: i haven't gone through the entire table yet. yesterday i brought chuck the table and i was like, "this is the mapping from n to this phonological feature called 'coronal' — shouldn't it be a one? should it be coronal, instead of not coronal as it was labelled in the paper?" so i haven't hunted down all the mistakes yet, but —
professor c: but as i was saying, people do get probabilities from these things — we were just trying to remember how. people have used it for speech recognition and they have gotten probabilities, so they have some conversion from these distances to probabilities.
professor c: you have the paper — the mississippi state paper? if you're interested, you could look.
phd e: so in the thing that you're doing, you have a vector of ones and zeros for each phone?
grad a: right — for every phone there is a vector of ones and zeros corresponding to whether it exhibits a particular phonological feature or not.
phd e: and so when you do your — what is the task for the class project, exactly? to come up with the phones? or to come up with these vectors, to see how closely they match the phones?
grad a: right: to come up with a mapping from mfccs, or some feature set, to whether a particular phonological feature is present. it's to learn a mapping from the mfccs to phonological features. did that answer your question?
phd e: i'm not sure what you get out of your system. do you get out a vector of these ones and zeros, and then try to find the closest matching phoneme to that vector?
grad a: no, no — i'm not planning to do any phoneme mapping yet. it's really simple: a detection of phonological features. king and taylor did this with recurrent neural nets, and their idea was to first find a mapping from mfccs to phonological features, and then later, once you have the phonological features, map those to phones. so i'm reproducing phase one of their work.
phd e: i wonder — did they compare that to just doing phone recognition and doing the reverse lookup? so you recognize a phone, and whichever phone was recognized, you spit out its vector of ones and zeros.
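Two small follow-ups in code: a Platt-style sigmoid that turns SVM distances into probabilities (the kind of conversion the mississippi state work is remembered to have used — the constants here are illustrative, not fitted), and the reverse lookup just asked about, from a recognized phone to its SPE-style bit vector (a made-up three-phone table).

```python
import numpy as np

def margin_to_prob(d, a=-1.7, b=0.0):
    # Platt-style sigmoid on the signed distance to the hyperplane;
    # a and b would normally be fit on held-out data.
    return 1.0 / (1.0 + np.exp(a * d + b))

print(margin_to_prob(np.array([-2.0, 0.0, 2.0])))  # ~[0.03, 0.5, 0.97]

# Reverse lookup: each phone indexes a fixed vector of binary
# phonological features (tiny hypothetical SPE-style table).
SPE = {
    "n":  (1, 1, 1),   # (voiced, sonorant, coronal)
    "s":  (0, 0, 1),
    "aa": (1, 1, 0),
}
recognized = "n"
print(recognized, "->", SPE[recognized])
```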
professor c: i expect you could do that. that's probably not what he's going to do for his class project.
professor c: so, have you had a chance to do this thing we talked about yet, with the —
professor c: no, actually i was going in a different — that's a good question too, but i was going to ask about the changes to the data in comparing plp and mel cepstrum for the sri system.
professor c: so we talked on the phone about this: there was still a difference of a few percent, and you told me there was a difference in how the normalization was done. i was asking if you were going to redo it for plp, with the normalization done as it had been done for the mel cepstrum.
phd e: right — no, i haven't had a chance to do that. what i've been doing is trying to figure out — it just seems to me like there's a bug, because the difference in performance isn't gigantic, but it's big enough that it seems wrong.
phd e: but i don't think the normalization difference is going to account for everything. so what i was working on was going through and checking the headers of the wavefiles, to see whether there was a certain type of compression being done that my script wasn't catching — so that for some subset of the training data, the features i was computing were junk. that would let it perform ok, but the models would be all messed up. so i was going through and double-checking that first, to see if there was some obvious bug in the way i was computing the features: looking at the sampling rates to make sure they were all eight k, what i was assuming they were.
phd e: so i was doing that first, before these other things, just to make sure there wasn't something —
professor c: although really, a two or three percent difference in word error rate could easily come from some difference in normalization, i would think.
phd e: and i'm trying to remember — i recall andreas saying he was going to run the reverse experiment: to emulate the normalization that we did, but with the mel cepstral features — to back up from the system that he had. he said he was going to — i'll have to look back through my email from him.
professor c: i'd think they should be roughly equivalent; again, the cambridge folk found plp actually to be a little better. the other thing i wonder about is whether there was something in the bootstrapping of their system, which was based on — but maybe not, since they —
phd e: see, one thing that's a little bit — i've been studying and going through the logs for the system that andreas created. the way the sri system looks like it works is that it reads the wavefiles directly and does all of the cepstral computation on the fly. so there's no place where the cepstral files are stored that i can go look at and compare to the plp ones. whereas with our features, he's actually storing the cepstrum on disk and reading those in. but it looked like, even though the cepstrum is already computed, he has to give it a front-end parameter file, which describes the computation that his mel cepstrum setup does. i don't know if that matters — it probably doesn't mess anything up; it probably just ignores it if it determines the data is already in the right format — but the two processes are a little different.
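A sketch of the header sanity check being described — scan the training wavefiles for anything that breaks the assumptions (8 kHz sample rate, plain uncompressed PCM) before trusting the computed features. The directory name is hypothetical, and real NIST/SPHERE headers would need a different reader; this shows the generic idea only.

```python
import wave
from pathlib import Path

# Flag any training wavefile whose header doesn't match assumptions.
for path in sorted(Path("train_wavs").rglob("*.wav")):  # hypothetical location
    with wave.open(str(path), "rb") as w:
        if w.getframerate() != 8000 or w.getcomptype() != "NONE":
            print(f"{path}: rate={w.getframerate()} comp={w.getcomptype()}")
```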
professor c: so anyway, there's something there to sort out. so — let's go back to what you thought i was asking you.
phd e: i've been working with jeremy on his project, and i've been trying to track down this bug in the icsi front-end features. one thing i did notice: yesterday i was studying the rasta code, and it looks like we don't have any way to control the frequency range that we use in our analysis. it looks to me like we do the fft and then just take all the bins and use everything; we don't have any set of parameters where we can say "only process from a hundred and ten hertz to thirty-seven-fifty". at least, i couldn't see any control for that.
professor c: i don't think it's in there; it's in the filters. the fft is on everything, but the filters ignore the lowest bins and the highest bins. and what it does is it copies —
professor c: it's bark scale, and it actually copies the second filter over to the first, so the first filters are always — and you can specify a different number of filters, as i recall. whatever number you specify, the outermost ones are going to be ignored. so that's the way you change what the bandwidth is; you can't do it without changing the number of filters.
phd e: i saw something that looked like it was doing something like that, but i didn't quite understand it. so maybe —
professor c: so the idea is that the very lowest frequencies, and typically the very highest, are junk. so for continuity you just approximate them by the second-to-lowest and second-to-highest. it's just a simple thing we put in. and so if you —
professor c: but you see my point? if you had ten filters, you'd be throwing away a lot at the two ends; if you had fifty filters, you'd be throwing away hardly anything. i don't remember there being an independent way of saying "we're just going to make them from here to here".
professor c: but it's actually been a while since i've looked at it.
phd e: i went through the feacalc code and then looked at just calling the rasta libs and things like that, and i couldn't see any place where that was done. but i didn't quite understand everything that i saw.
professor c: it calls rasta with some options, but — for some particular database you might find you could tune and tweak that to do a little better, but in general it's not that critical. you can throw away below a hundred hertz or so and it's just not going to affect phonetic classification.
phd e: another thing i was thinking about: i was wondering whether there are certain settings of the parameters, when you compute plp, which would cause it to output mel cepstrum — so that, in effect, i could use our code but produce mel cepstrum and compare that directly to —
professor c: it's not precisely that. what you can do is definitely change the filter bank from a trapezoidal integration to a triangular one, which is what the typical mel cepstral filter bank does. some people have claimed they got somewhat better performance doing that, so you could certainly do that easily. but there's a fundamental difference — well, there are other small differences —
professor c: but — as opposed to the log in the other case.
the fundamental difference — the one place we've seen any difference before, which is actually an advantage for the plp, i think — is that the smoothing at the end is auto-regressive instead of coming from cepstral truncation. so it's a little more noise-robust. and that's why, when people started getting databases that had a little more noise in them, like broadcast news and so on, cambridge switched to plp. that's a difference i don't think we put in any way to get around, since it was an advantage. but we did hear this comment from people at some point, that they got somewhat better results with the triangular filters rather than the trapezoidal. so that is an option in rasta, and you can certainly play with it. but you're probably doing the right thing to look for bugs first.
phd e: it just seems like this behavior could be caused by some of the training data being messed up.
phd e: you're getting most of the way there, but there's a — so i started going through and looking. one thing i did notice was that the log likelihoods coming out of the recognizer from the plp data were much lower — much smaller — than for the mel cepstral, and the average amount of pruning that was happening was therefore a little bit higher for the plp features.
phd e: so, since he used the exact same pruning thresholds for both, i was wondering if it could be that we're getting more pruning.
professor c: he used identical pruning thresholds even though the range of the likelihoods — that's a pretty good point right there. i would think you might want to do something like look at a few points to see where you start getting significant search errors.
phd e: what i was going to do is take a couple of the utterances he had run through, run them through again with a modified pruning threshold, and see if it affects the score.
professor c: and if that looks promising, you could run the overall test set with a few different pruning thresholds for both. presumably he's running at some pruning threshold that gets very few search errors but is relatively fast.
phd e: right. generally in these things you back off the pruning really far, so i didn't think it would be that big a deal — i was figuring you have it backed off so far that it —
professor c: but you may be in the wrong range for the plp features, for some reason.
phd e: and the run time of the recognizer on the plp features is longer, which implies the networks are bushier — there are more things it's considering — which goes along with the fact that the matches aren't as good. so it could be that we're just pruning too much.
professor c: or they may just be different distributions; that's another possibility. there's no particular reason why they would behave exactly the same.
phd e: right. there are lots of little differences. i'm trying to track it down.
professor c: i guess this was a little bit off topic — i was thinking of this as a core item that, once we had it going, we would use for a number of the front-end things also. want to —
grad b: i tried this mean subtraction method.
it's due to avendano. i'm taking six seconds of speech and using two-second fft analysis frames, stepped by a half second — so it's a quarter-length step. i take the current frame plus the four past frames and the four future frames, and that adds up to six seconds of speech. i calculate the spectral mean of the log magnitude spectrum over that window and use it to normalize the current center frame by mean subtraction. then i move to the next frame and do it again — actually, i calculate all the means first and then do the subtraction. i tried that with htk — the aurora setup of htk — training on clean ti-digits, and it helped in a phony reverberation case where i used a simulated impulse response: the error rate went from something like eighteen percent to four percent. and on meeting recorder far-mike digits — the mike on channel f — it went from forty-one percent error to eight percent error.
grad b: right. and that was trained on clean speech only, which i'm guessing is the reason the baseline was so bad.
professor c: actually, a little side point: those are the first results we have of any sort on the far-field data recorded in meetings.
grad b: on the far field also — he did one pzm channel and one pda channel.
professor c: did he? i didn't recall that. what numbers was he getting with that?
grad b: i'm not sure; it was about five percent error for the pzm channel.
grad b: i'm guessing it was the training data. clean ti-digits is pretty pristine training data, whereas if they trained the sri system on this tv-broadcast-type stuff, it's a much wider range of channels, and it —
professor c: but wait a minute — what am i saying here? so that was the sri system. maybe you're right, cuz it was getting like one percent, so it's still this ratio. it was getting one percent on the near field, wasn't it?
professor c: yes — it was getting around one percent for the close mike.
professor c: so it was like one to five — still this ratio. it's just a lot more training data. so something we should probably try at some point is to transform the data and then use it with the sri system.
professor c: so you have a system which, for one reason or another, is relatively poor: it gets something like forty-one percent error, and you transform that to eight by doing this work. and here's this other system, which is a lot better, but there's still this ratio — something like five percent error with the distant mike and one percent with the close mike. so the question is: how close to that one percent can you get if you transform the data using that system?
grad b: right. so this sri system is trained on a lot of broadcast news or switchboard data — is that right? do you know which one it is?
phd e: it's trained on a lot of different things. it's trained on a lot of switchboard, call home —
phd e: a bunch of different sources. some digits — there's some digits training in there.
grad b: one thing i'm wondering about is what this mean subtraction method will do when it's faced with additive noise. cuz i don't know what log magnitude spectral subtraction is going to do to additive noise.
professor c: that's the — it's not exactly the right thing, but you've already seen the answer, cuz there is added noise here.
grad b: that's true — that's a good point. ok, so then it's reasonable to expect it would be helpful if we used it with the sri system and —
professor c: as helpful — that's the question. we're often asked this when we work with a system that isn't industry-standard great and we see some reduction in error using some clever method: will it work on a good system? this other one is a pretty good system. one percent word error rate on digit strings is not stellar, but given that these are real digits, as opposed to laboratory —
professor c: and it wasn't trained on this task. actually, one percent is in a reasonable range; people would say, "i could imagine getting that". and the four or five percent is quite poor: if you're doing a sixteen-digit credit card number, you'll get it wrong almost all the time. so a significant reduction in the error for that would be great.
professor c: alright, i actually have to run. so i don't think i'll do the digits, but i'll leave my microphone on?
professor c: i'll be out of here quickly — i have to run to another appointment. ok. i left it on.
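For reference, a sketch of the mean-subtraction recipe described above: two-second log-magnitude FFT frames stepped by half a second, a spectral mean over the current frame plus four past and four future frames (six seconds in total), with all means computed first and then subtracted. The sample rate, the toy input, and the omission of any resynthesis or downstream feature extraction are simplifications, not the actual setup.

```python
import numpy as np

def long_term_mean_subtraction(signal, sr=8000, frame_s=2.0, step_s=0.5, context=4):
    """Subtract a six-second log-spectral mean (current +/- `context` frames)
    from each two-second log-magnitude FFT frame."""
    flen, step = int(frame_s * sr), int(step_s * sr)
    n_frames = 1 + (len(signal) - flen) // step
    frames = np.stack([signal[i * step : i * step + flen] for i in range(n_frames)])
    log_mag = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)

    # Compute all the means first, then subtract (as in the description).
    normalized = np.empty_like(log_mag)
    for t in range(n_frames):
        lo, hi = max(0, t - context), min(n_frames, t + context + 1)
        normalized[t] = log_mag[t] - log_mag[lo:hi].mean(axis=0)
    return normalized

# e.g. out = long_term_mean_subtraction(np.random.randn(8000 * 10))
```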
the icsi meeting recorder group met once more to discuss their recent progress on various projects, as well as some of the issues that have arisen in the last week. there has been further work on voiced/unvoiced detection, along with spectral subtraction. the group discussed one member's attendance at a conference, and another group's code, which is proving hard to follow. in trying to understand france telecom's code, there does not appear to be a reason for a particular constant value. speaker fn002 reported on her work with mn007, who was absent attending a conference. they have been working on voiced/unvoiced detection and have tried two different configurations of mlp, although results have not improved greatly. she has also begun looking at france telecom's recogniser code. speaker me006 has been coming up with tests for his work using acoustic events. speaker me018 is investigating differences in another system, and feels it may be down to bugs. speaker me026 has made real progress on his spectral subtraction work, with an implementation and early results.
###dialogue: phd d: , i will try to explain the thing that i did this week during this week . that i work i begin to work with a new feature to detect voice - unvoice . phd d: what i trying two mlp to the with this new feature and the fifteen feature from the bus base system phd d: and i ' m trying two mlp , one that only have t three output , voice , unvoice , and silence , phd d: and other one that have fifty - six output . the probabilities of the allophone . and i tried to do some experiment of recognition with that and only have result with the mlp with the three output . and i put together the fifteen features and the three mlp output . and , the result are li a little bit better , but more or less similar . professor c: , i ' m slightly confused . what what feeds the three - output net ? phd d: the feature the input ? the inputs are the fifteen bases feature . the with the new code . and the other three features are r , the variance of the difference between the two spectrum , the variance of the auto - correlation function , except the first point , because half the height value is r - zero and also r - zero , the first coefficient of the auto - correlation function . that is like the energy with these three feature , professor c: you would n't do like r - one over r - zero like that ? usually for voiced - unvoiced you 'd do , you 'd do something you 'd do energy but then you have something like spectral slope , which is you get like r - one ov over r - zero like that . phd d: no , r c no . auto - correlation ? yes , yes , the variance of the auto - correlation function that uses that professor c: ye - that 's the variance , but if you just say " what is " , to first order , one of the differences between voiced , unvoiced and silence is energy . another one is but the other one is the spectral shape . professor c: , and so r - one over r - zero is what you typically use for that . professor c: see , because it because this is just like a single number to tell you " does the spectrum look like that or does it look like that " . professor c: right ? so if it 's if it 's low energy but the spectrum looks like that or like that , it 's probably silence . but if it 's low energy and the spectrum looks like that , it 's probably unvoiced . so if you just had to pick two features to determine voiced - unvoiced , you 'd pick something about the spectrum like r - one over r - zero , and r - zero professor c: or i you 'd have some other energy measure and like in the old days people did like zero crossing counts . phd d: , also th use this . bec - because the result are a little bit better but we have in a point that everything is more or less the similar more or less similar . professor c: right , but it seemed to me that what you were getting at before was that there is something about the difference between the original signal or the original fft and with the filter which is what and the variance was one take on it . professor c: but it could be something else . suppose you did n't have anything like that . then in that case , if you have two nets , alright , and this one has three outputs , and this one has f whatever , fifty - six , if you were to sum up the probabilities for the voiced and for the unvoiced and for the silence here , we ' ve found in the past you 'll do better at voiced - unvoiced - silence than you do with this one . so just having the three output thing does n't really buy you anything . the issue is what you feed it . 
phd e: so you 're saying take the features that go into the voiced - unvoiced - silence net and feed those into the other one , as additional inputs , rather than having a separate professor c: w w that 's another way . that was n't what i was saying but that 's certainly another thing to do . no i was just trying to say if you b if you bring this into the picture over this , what more does it buy you ? professor c: and what i was saying is that the only thing that it buys you is based on whether you feed it something different . and something different in some fundamental way . and so the thing that she was talking about before , was looking at something ab something about the difference between the log fft log power and the log magnitude f - spectrum and the filter bank . and so the filter bank is chosen to integrate out the effects of pitch and she 's saying trying so the particular measure that she chose was the variance of this m of this difference , but that might not be the right number . professor c: maybe there 's something about the variance that 's not enough or maybe there 's something else that one could use , but that , for me , the thing that struck me was that you wanna get something back here , so here 's an idea . what about it you skip all the really clever things , and just fed the log magnitude spectrum into this ? professor c: this is f you have the log magnitude spectrum , and you were looking at that and the difference between the filter bank and c computing the variance . that 's a clever thing to do . what if you stopped being clever ? and you just took this thing in here because it 's a neural net and neural nets are wonderful and figure out what they can what they most need from things , and that 's what they 're good at . so you 're trying to be clever and say what 's the statistic that should we should get about this difference but , maybe just feeding this in or feeding both of them in , another way , saying let it figure out what 's the what is the interaction , especially if you do this over multiple frames ? then you have this over time , and both kinds of measures and you might get something better . professor c: that 's another thing you could do , it seems to me , if you have exactly the right thing then it 's better to do it without the net because otherwise you 're asking the net to learn this , say if you wanted to learn how to do multiplication . you could feed it a bunch of s you could feed two numbers that you wanted to multiply into a net and have a bunch of nonlinearities in the middle and train it to get the product of the output and it would work . but , it 's crazy , cuz we know how to multiply and you 'd be much lower error usually if you just multiplied it out . but suppose you do n't really the right thing is . and that 's what these dumb machine learning methods are good at . so . anyway . it 's just a thought . phd e: how long does it take , carmen , to train up one of these nets ? phd d: the accuracy . no for , yes f i do n't remember for voice - unvoice , phd d: this is for the other one . i should i ca n't show that . but that fifty - five was for the when the output are the fifty - six phone . phd d: that i look in the with the other nnn the other mlp that we have are more or less the same number . silence will be better but more or less the same . professor c: at the frame level for fifty - six that was the number we were getting for reduced band width . 
phd d: that for the other one , for the three output , is sixty - two , sixty three more or less . professor c: but even i in , in training . still , actually , so this is a test that you should do then . , if you 're getting fifty - six percent over here , that 's in noise also , right ? if you 're getting fifty - six here , try adding together the probabilities of all of the voiced phones here and all of the unvoiced phones professor c: and see what you get then . i bet you get better than sixty - three . phd d: i , but i th that we i have the result more or less . maybe . i do n't i ' m not but i remember @ that i ca n't show that . professor c: ok , but that 's a that is a good check point , you should do that anyway , ok ? given this regular old net that 's just for choosing for other purposes , add up the probabilities of the different subclasses and see how you do . and that anything that you do over here should be at least as good as that . phd d: noisy timit . we have noisy timit with the noise of the ti - digits . and now we have another noisy timit also with the noise of italian database . professor c: i see . there 's gon na be it looks like there 's gon na be a noisy some large vocabulary noisy too . somebody 's preparing . professor c: i forget what it 'll be , resource management , wall street journal , something . some some read task actually , that they 're preparing . professor c: , so the , the issue is whether people make a decision now based on what they ' ve already seen , or they make it later . and one of the arguments for making it later is let 's make that whatever techniques that we 're using work for something more than connected digits . phd d: this is the work that i did during this date and also mmm i h hynek last week say that if i have time to begin to study the france telecom proposal to look at the code and something like that to know exactly what they are doing because maybe that we can have some ideas but not only to read the proposal . look insi look i carefully what they are doing with the program @ and i begin to work also in that . but the first thing that i do n't understand is that they are using r - the log energy that this quite i why they have some constant in the expression of the lower energy . i what that means . professor c: , at the front it says " log energy is equal to the rounded version of sixteen over the log of two " professor c: , this is natural log , and maybe it has something to do with the fact that this is i have no idea . professor c: , that 's what i was thinking , but , then there 's the sixty - four , i . phd d: because maybe they 're the threshold that they are using on the basis of this value phd d: i exactly , because th maybe they have a meaning . but i what is the meaning of take exactly this value . phd e: so they 're taking the number inside the log and raising it to sixteen over log base two . professor c: if we ignore the sixteen , the natural log of t one over the natural log of two times the natu , maybe somebody 'll think of something , but this is it may just be that they want to have for very small energies , they want to have some a phd d: , the e the effect i do n't @ understand the effect of this , no ? because it 's to do something like that . professor c: , it says , since you 're taking a natural log , it says that when you get down to essentially zero energy , this is gon na be the natural log of one , which is zero . 
professor c: so it 'll go down to { nonvocalsound } the natural log being so the lowest value for this would be zero . so y you 're restricted to being positive . and this smooths it for very small energies . , why they chose sixty - four and something else , that was probably just experimental . and the constant in front of it , i have no idea . phd d: . i will look to try if i move this parameter in their code what happens , maybe everything is maybe they tres hole are on basis of this . professor c: it they probably have some fi particular s fixed point arithmetic that they 're using , and then it just phd e: , i was just gon na say maybe it has something to do with hardware , something they were doing . professor c: that they 're s probably working with fixed point or integer . you 're supposed to on this anyway , and so maybe that puts it in the right realm somewhere . professor c: , given at the level you 're doing things in floating point on the computer , i do n't think it matters , would be my , but . professor c: , he was gone these first few days , and then he 's here for a couple days before he goes to salt lake city . professor c: so he 's going to icassp which is good . i if there are many people who are going to icassp so , make somebody go . professor c: , people are less consistent about going to icassp and it 's still a reasonable forum for students to present things . , it 's for engineering students of any kind , it 's if you have n't been there much , it 's good to go to , to get a feel for things , a range of things , not just speech . but for dyed - in - the - wool speech people , that icslp and eurospeech are much more targeted . and then there 's these other meetings , like hlt and asru so there 's actually plenty of meetings that are really relevant to computational speech processing of one sort or another . so . , i mostly just ignored it because i was too busy and did n't get to it . wanna talk a little bit about what we were talking about this morning ? just briefly , or or anything else ? grad a: so . i some of the progress , i ' ve been getting a getting my committee members for the quals . and so far i have morgan and hynek , mike jordan , and i asked john ohala and he . . so i ' m need to ask malek . one more . tsk . then i talked a little bit about continuing with these dynamic ev acoustic events , and we 're thinking about a way to test the completeness of a set of dynamic events . , completeness in the sense that if we pick these x number of acoustic events , do they provide sufficient coverage for the phones that we 're trying to recognize or the f the words that we 're gon na try to recognize later on . and so morgan and i were discussing s a form of a cheating experiment where we get we have a chosen set of features , or acoustic events , and we train up a hybrid system to do phone recognition on timit . so i the idea is if we get good phone recognition results , using these set of acoustic events , then that says that these acoustic events are g sufficient to cover a set of phones , at least found in timit . so i it would be a measure of " are we on the right track with the choices of our acoustic events " . , so that 's going on . and also , just working on my final project for jordan 's class , which is professor c: let me back up while we 're still on it . 
the the other thing i was suggesting , though , is that given that you 're talking about binary features , maybe the first thing to do is just to count and count co - occurrences and get probabilities for a discrete hmm cuz that 'd be pretty simple because it 's just say , if you had ten events , that you were counting , each frame would only have a thousand possible values for these ten bits , and so you could make a table that would say , if you had thirty - nine phone categories , that would be a thousand by thirty - nine , and just count the co - occurrences and divide them by the occ count the co - occurrences between the event and the phone and divide them by the number of occurrences of the phone , and that would give you the likelihood of the event given the phone . and then just use that in a very simple hmm and you could do phone recognition then and would n't have any of the issues of the training of the net or , it 'd be on the simple side , but , if the example i was giving was that if you had onset of voicing and end of voicing as being two kinds of events , then if you had those a all marked correctly , and you counted co - occurrences , you should get it completely right . but you 'd get all the other distinctions , randomly wrong . there 'd be nothing to tell you that . if you just do this by counting , then you should be able to find out in a pretty straightforward way whether you have a sufficient set of events to do the level of classification of phones that you 'd like . so that was the idea . and then the other thing that we were discussing was ok , how do you get the your training data . cuz the switchboard transcription project was half a dozen people , or so working off and on over a couple years , similar amount of data to what you 're talking about with timit training . so , it seems to me that the only reasonable starting point is to automatically translate the current timit markings into the markings you want . and it wo n't have the characteristic that you 'd like , of catching funny things that maybe are n't there from these automatic markings , it 's professor c: and a short amount of time , just to again , just to see if that information is sufficient to determine the phones . phd e: , you could even then to get an idea about how different it is , you could maybe take some subset and , go through a few sentences , mark them by hand and then see how different it is from , the canonical ones , just to get an idea a rough idea of h if it really even makes a difference . professor c: you can get a little feeling for it that way , that is probably right . my would be that this is since timit 's read speech that this would be less of a big deal , if you went and looked at spontaneous speech it 'd be more of one . professor c: and the other thing would be , say , if you had these ten events , you 'd wanna see , what if you took two events or four events or ten events or t and , and hopefully there should be some point at which having more information does n't tell you really all that much more about what the phones are . professor c: , you could , but , what he 's talking about here is a translation to a per - frame feature vector , so there 's no sequence in that , it 's just a professor c: just . the idea is with a very simple statistical structure , could you at least verify that you ' ve chosen features that are sufficient . ok , and you were saying something starting to say something else about your class project , or ? grad a: th . 
so for my class project i ' m tinkering with support vector machines ? something that we learned in class , and just another method for doing classification . and so i ' m gon na apply that to compare it with the results by king and taylor who did these using recurrent neural nets , they recognized a set of phonological features and made a mapping from the mfcc 's to these phonological features , so i ' m gon na do a similar thing with support vector machines and see if grad a: . so , support vector machines are good with dealing with a less amount of data and so if you give it less data it still does a reasonable job in learning the patterns . and phd e: does there some a distance metric that they use or how do they for cla what do they do for classification ? grad a: so , the simple idea behind a support vector machine is , you have this feature space , right ? and then it finds the optimal separating plane , between these two different classes , and so , what it i at the end of the day , what it actually does is it picks those examples of the features that are closest to the separating boundary , and remembers those and uses them to recreate the boundary for the test set . so , given these features , or these examples , critical examples , which they call support f support vectors , then given a new example , if the new example falls away from the boundary in one direction then it 's classified as being a part of this particular class and otherwise it 's the other class . phd e: so why save the examples ? why not just save what the boundary itself is ? professor c: that 's another way of doing it . right ? so so it i it 's professor c: , it goes back to nearest - neighbor thing , i if is it w when is nearest - neighbor good ? , nearest - neighbor good is good if you have lots and lots of examples . but if you have lots and lots of examples , then it can take a while to use nearest - neighbor . there 's lots of look ups . so a long time ago people talked about things where you would have a condensed nearest - neighbor , where you would pick out some representative examples which would be sufficient to represent to correctly classify everything that came in . s support vector goes back to that thing . phd e: i see . so rather than doing nearest neighbor where you compare to every single one , you just pick a few critical ones , and professor c: and th the , neural net approach or gaussian mixtures for that matter are fairly brute force kinds of things , where you predefine that there is this big bunch of parameters and then you place them as you best can to define the boundaries , and , as , these things do take a lot of parameters and if you have only a modest amount of data , you have trouble learning them . , so i the idea to this is that it is reputed to be somewhat better in that regard . grad a: i it can be a reduced parameterization of the model by just keeping certain selected examples . so . professor c: but i if people have done careful comparisons of this on large tasks or anything . maybe maybe they have . grad a: actually you do n't get a number between zero and one . you get you get either a zero or a one . , there are pap , it 's you get a distance measure at the end of the day , and then that distance measure is translated to a zero or one . phd e: and you get that for each class , you get a zero or a one . professor c: cuz actually mississippi state people did use support vector machines for speech recognition and they were using it to estimate probabilities . 
grad a: , they had a way to translate the distances into probabilities with the simple sigmoidal function . professor c: and d did they use sigmoid or a softmax type thing ? and did n't they like exponentiate professor c: and then divide by the sum of them , or ? it i , so it is a sigmoidal . alright . professor c: , they 're ok , i do n't think they were earth shattering , but that this was a couple years ago , i remember them doing it at some meeting , and i do n't think people were very critical because it was interesting just to try this and , it was the first time they tried it , so the , the numbers were not incredibly good but there 's , it was th reasonable . i do n't remember anymore . i do n't even remember what the task was , it was broadcast news , . grad b: s so barry , if you just have zero and ones , how are you doing the speech recognition ? grad a: i ' m not do i ' m not planning on doing speech recognition with it . i ' m just doing detection of phonological features . grad a: so , this feature set called the sound patterns of english is just a bunch of binary valued features . let 's say , is this voicing , or is this not voicing , is this sonorants , not sonorants , and like that . grad a: i have n't gone through the entire table , yet . yesterday i brought chuck the table and i was like , " , this is is the mapping from n to this phonological feature called " coronal " , is should it be should n't it be a one ? or should it be coronal instead of not coronal as it was labelled in the paper ? " so i ha have n't hunted down all the mistakes yet , but professor c: but a as i was saying , people do get probabilities from these things , and we were just trying to remember how they do , but people have used it for speech recognition , and they have gotten probabilities . so they have some conversion from these distances to probabilities . professor c: there 's you have the paper , the mississippi state paper ? , if you 're interested y you could look , phd e: so in your in the thing that you 're doing , you have a vector of ones and zeros for each phone ? grad a: right , right f so for every phone there is a vector of ones and zeros f corresponding to whether it exhibits a particular phonological feature or not . phd e: and so when you do your wh i ' m what is the task for the class project ? to come up with the phones ? or to come up with these vectors to see how closely they match the phones , grad a: right , to come up with a mapping from mfcc 's or s some feature set , to w to whether there 's existence of a particular phonological feature . and it 's to learn a mapping from the mfcc 's to phonological features . is it did that answer your question ? phd e: i , i ' m not what you 're what you get out of your system . do you get out a vector of these ones and zeros and then try to find the closest matching phoneme to that vector , grad a: no , no . i ' m not planning to do any phoneme mapping yet . just it 's it 's really simple , a detection of phonological features . and cuz the so king and taylor did this with recurrent neural nets , and this i their idea was to first find a mapping from mfcc 's to phonological features and then later on , once you have these phonological features , then map that to phones . so i ' m reproducing phase one of their . phd e: i wo did they compare that , what if you just did phone recognition and did the reverse lookup . so you recognize a phone and which ever phone was recognized , you spit out it 's vector of ones and zeros . 
professor c: i expect you could do that . that 's probably not what he 's going to do on his class project . professor c: so have you had a chance to do this thing we talked about yet with the professor c: . no actually i was going a different that 's a good question , too , but i was gon na ask about the changes to the data in comparing plp and mel cepstrum for the sri system . professor c: so we talked on the phone about this , that there was still a difference of a few percent and you told me that there was a difference in how the normalization was done . and i was asking if you were going to do redo it for plp with the normalization done as it had been done for the mel cepstrum . phd e: right , no i have n't had a chance to do that . what i ' ve been doing is trying to figure out it just seems to me like there 's a it seems like there 's a bug , because the difference in performance is it 's not gigantic but it 's big enough that it seems wrong . phd e: , but i do n't i ' m not , i do n't think that the normalization difference is gon na account for everything . so what i was working on is just going through and checking the headers of the wavefiles , to see if maybe there was a certain type of compression that was done that my script was n't catching . so that for some subset of the training data , the features i was computing were junk . which would it to perform ok , but , the models would be all messed up . so i was going through and just double - checking that think first , to see if there was just some obvious bug in the way that i was computing the features . looking the sampling rates to make all the sampling rates were what eight k , what i was assuming they were , phd e: so i was doing that first , before i did these other things , just to make there was n't something professor c: although really , a couple three percent difference in word error rate could easily come from some difference in normalization , i would think . phd e: and , hhh i ' m trying to remember but i recall that andreas was saying that he was gon na run the reverse experiment . which is to try to emulate the normalization that we did but with the mel cepstral features . , back up from the system that he had . he said he was gon na i have to look back through my email from him . professor c: the i sh think they should be roughly equivalent , again the cambridge folk found the plp actually to be a little better . so it 's the other thing i wonder about was whether there was something just in the bootstrapping of their system which was based on but maybe not , since they phd e: see one thing that 's a little bit i was looking i ' ve been studying and going through the logs for the system that andreas created . and his the way that the s r i system looks like it works is that it reads the wavefiles directly , and does all of the cepstral computation on the fly . and , so there 's no place where these where the cepstral files are stored , anywhere that go look at and compare to the plp ones , so whereas with our features , he 's actually storing the cepstrum on disk , and he reads those in . but it looked like he had to give it even though the cepstrum is already computed , he has to give it a front - end parameter file . which talks about the com computation that his mel cepstrum thing does , so i i if that it probably does n't mess it up , it probably just ignores it if it determines that it 's already in the right format but the two processes that happen are a little different . so . 
professor c: so anyway , there 's there to sort out . so , let 's go back to what you thought i was asking you . phd e: . i ' ve been , i ' ve been working with jeremy on his project and then i ' ve been trying to track down this bug in the icsi front - end features . so one thing that i did notice , yesterday i was studying the rasta code and it looks like we do n't have any way to control the frequency range that we use in our analysis . we it looks to me like we do the fft , and then we just take all the bins and we use everything . we do n't have any set of parameters where we can say , " only process from a hundred and ten hertz to thirty - seven - fifty " . at least i could n't see any control for that . professor c: , i do n't think it 's in there , it 's in the filters . so , the f t is on everything , but the filters , ignore the lowest bins and the highest bins . and what it does is it copies professor c: it 's bark scale , and it 's it actually copies the second filters over to the first . so the first filters are always and you can s you can specify a different number of features different number of filters , as i recall . so you can specify a different number of filters , and whatever you specify , the last ones are gon na be ignored . so that 's a way that you change what the bandwidth is . y you ca n't do it without changing the number of filters , phd e: i saw something about that looked like it was doing something like that , but i did n't quite understand it . so maybe professor c: , so the idea is that the very lowest frequencies and typically the veriest highest frequencies are junk . and so you just for continuity you just approximate them by the second to highest and second to lowest . it 's just a simple thing we put in . and and so if you h professor c: but see my point ? if you had ten filters , then you would be throwing away a lot at the two ends . and if you had fifty filters , you 'd be throwing away hardly anything . , i do n't remember there being an independent way of saying " we 're just gon na make them from here to here " . professor c: but i , it 's actually been awhile since i ' ve looked at it . phd e: , i went through the feacalc code and then looked at just calling the rasta libs and thing like that . and i did n't i could n't see any wh place where that thing was done . but i did n't quite understand everything that i saw , professor c: but it calls rasta with some options , but in i for some particular database you might find that you could tune that and tweak that to get that a little better , but that in general it 's not that critical . there 's you can throw away below a hundred hertz or so and it 's just not going to affect phonetic classification . phd e: another thing i was thinking about was is there a i was wondering if there 's maybe certain settings of the parameters when you compute plp which would it to output mel cepstrum . so that , in effect , what i could do is use our code but produce mel cepstrum and compare that directly to professor c: , it 's not precisely . what you can do is you can definitely change the filter bank from being a trapezoidal integration to a triangular one , which is what the typical mel cepstral filter bank does . and some people have claimed that they got some better performance doing that , so you certainly could do that easily . but the fundamental difference , there 's other small differences professor c: but , as opposed to the log in the other case . 
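a schematic numpy illustration of the edge-channel copying just described (not the actual rasta code): the outermost critical-band filters sit on junk frequencies, so they are overwritten with their inner neighbours, which is why the number of filters is the only handle on the effective analysis bandwidth.

import numpy as np

def copy_edge_channels(band_energies):
    # band_energies: (num_filters, num_frames) critical-band outputs.
    # the lowest and highest bands cover junk frequencies, so for
    # continuity they are replaced by their inner neighbours.
    out = band_energies.copy()
    out[0] = out[1]      # first channel <- second
    out[-1] = out[-2]    # last channel <- second-to-last
    return out

e = np.arange(15.0).reshape(5, 3)   # 5 bands, 3 frames
print(copy_edge_channels(e))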
the fundamental d difference that we ' ve seen any difference from before , which is actually an advantage for the p l p i , is that the smoothing at the end is auto - regressive instead of being cepstral , from cepstral truncation . so it 's a little more noise robust . , and that 's why when people started getting databases that had a little more noise in it , like broadcast news and so on , that 's why c cambridge switched to plp . that 's a difference that i do n't think we put any way to get around , since it was an advantage . but we did hear this comment from people at some point , that it they got some better results with the triangular filters rather than the trapezoidal . so that is an option in rasta . and you can certainly play with that . but you 're probably doing the right thing to look for bugs first . phd e: just it just seems like this behavior could be caused by s some of the training data being messed up . phd e: , you 're getting most of the way there , but there 's a so i started going through and looking one of the things that i did notice was that the log likelihoods coming out of the log recognizer from the plp data were much lower , much smaller , than for the mel cepstral , and that the average amount of pruning that was happening was therefore a little bit higher for the plp features . phd e: so , since he used the same exact pruning thresholds for both , i was wondering if it could be that we 're getting more pruning . professor c: he he he used the identical pruning thresholds even though the s the range of p of the likeli that 's that 's a pretty good point right there . i would think that you might wanna do something like , look at a few points to see where you are starting to get significant search errors . phd e: that 's , what i was gon na do is i was gon na take a couple of the utterances that he had run through , then run them through again but modify the pruning threshold and see if it , affects the score . professor c: but you could if that looks promising you could , r run the overall test set with a few different pruning thresholds for both , and presumably he 's running at some pruning threshold that 's , gets very few search errors but is relatively fast phd e: right . , generally in these things you turn back pruning really far , so i did n't think it would be that big a deal because i was figuring you have it turned back so far that it professor c: but you may be in the wrong range for the p l p features for some reason . phd e: and the run time of the recognizer on the plp features is longer which implies that the networks are bushier , there 's more things it 's considering which goes along with the fact that the matches are n't as good . so , it could be that we 're just pruning too much . professor c: , maybe just be different distributions so that 's another possible thing . they they should really should n't there 's no particular reason why they would be exactly behave exactly the same . phd e: - . right . there 's lots of little differences . trying to track it down . professor c: i this was a little bit off topic , i , because i was thinking in terms of th this as being a core item that once we had it going we would use for a number of the front - end things also . wanna grad b: , i tried this mean subtraction method . 
due to avendano , i ' m taking s six seconds of speech , i ' m using two second fft analysis frames , stepped by a half second so it 's a quarter length step and i take that frame and four f the four i take the current frame and the four past frames and the four future frames and that adds up to six seconds of speech . and i calculate the spectral mean , of the log magnitude spectrum over that n . i use that to normalize the s the current center frame by mean subtraction . and i then i move to the next frame and i do it again . , actually i calculate all the means first and then i do the subtraction . and the i tried that with hdk , the aurora setup of hdk training on clean ti - digits , and it helped in a phony reverberation case where used the simulated impulse response the error rate went from something like eighty it was from something like eighteen percent to four percent . and on meeting rec recorder far mike digits , mike on channel f , it went from forty - one percent error to eight percent error . grad b: right . and that was trained on clean speech only , which i ' m guessing is the reason why the baseline was so bad . professor c: that 's ac actually a little side point is that 's the first results that we have of any sort on the far field data for recorded in meetings . grad b: on the far field also . he did one pzm channel and one pda channel . professor c: did he ? i did n't recall that . what numbers was he getting with that ? grad b: i ' m not , it was about five percent error for the pzm channel . grad b: i ' m g i ' m guessing it was the training data . , clean ti - digits is , like , pretty pristine training data , and if they trained the sri system on this tv broadcast type , it 's a much wider range of channels and it professor c: but a minute . i th he what am i saying here ? so that was the sri system . maybe you 're right . cuz it was getting like one percent so it 's still this ratio . it was it was getting one percent on the near field . was n't it ? professor c: . it was getting around one percent for the near for the n for the close mike . professor c: so it was like one to five so it 's still this ratio . it 's just it 's a lot more training data . so probably it should be something we should try then is to see if is at some point just to take i to transform the data and then use th use it for the sri system . professor c: so you 're so you have a system which for one reason or another is relatively poor , and you have something like forty - one percent error and then you transform it to eight by doing this work . so here 's this other system , which is a lot better , but there 's still this ratio . it 's something like five percent error with the distant mike , and one percent with the close mike . so the question is how close to that one can you get if you transform the data using that system . grad b: r right , so i this sri system is trained on a lot of s broadcast news or switchboard data . is that right ? do which one it is ? phd e: it 's trained on a lot of different things . it 's trained on a lot of switchboard , call home , phd e: a bunch of different sources , some digits , there 's some digits training in there . grad b: o one thing i ' m wondering about is what this mean subtraction method will do if it 's faced with additive noise . cuz i it 's cuz i what log magnitude spectral subtraction is gon na do to additive noise . 
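a minimal numpy sketch of the mean subtraction just described: two-second fft frames stepped by half a second, with the log-magnitude spectral mean over the current frame plus four past and four future frames (six seconds in all) subtracted from each centre frame. resynthesis back to a waveform is omitted, and the window choice is an assumption since the description leaves it unspecified.

import numpy as np

def mean_subtract(signal, fs=8000, frame_s=2.0, step_s=0.5, context=4):
    n = int(frame_s * fs)       # two-second analysis frame
    hop = int(step_s * fs)      # half-second (quarter-length) step
    window = np.hanning(n)      # assumption: the original window is unspecified
    frames = [signal[i:i + n] for i in range(0, len(signal) - n + 1, hop)]
    logmag = np.array([np.log(np.abs(np.fft.rfft(f * window)) + 1e-10)
                       for f in frames])
    out = np.empty_like(logmag)
    # compute all the neighbourhood means first, then subtract, as described:
    # current frame plus four past and four future frames spans six seconds
    for t in range(len(logmag)):
        lo, hi = max(0, t - context), min(len(logmag), t + context + 1)
        out[t] = logmag[t] - logmag[lo:hi].mean(axis=0)
    return out   # normalized log-magnitude spectra, one row per frame

feats = mean_subtract(np.random.randn(8000 * 10))   # ten seconds of toy signal
print(feats.shape)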
that 's that 's the professor c: , it 's not exactly the right thing but you ' ve already seen that cuz there is added noise here . grad b: that 's that 's , that 's true . that 's a good point . ok , so it 's then it 's reasonable to expect it would be helpful if we used it with the sri system and professor c: , as helpful so that 's the question . , w we 're often asked this when we work with a system that is n't industry standard great , and we see some reduction in error using some clever method , then , will it work on a on a good system . , this other one 's it was a pretty good system . , one percent word error rate on digits is digit strings is not stellar , but given that this is real digits , as opposed to laboratory professor c: and it was n't trained on this task . actually one percent is , in a reasonable range . people would say " , i could imagine getting that " . and so the four or five percent is quite poor . , if you 're doing a sixteen digit credit card number you 'll get it wrong almost all the time . so . , a significant reduction in the error for that would be great . professor c: alright , i actually have to run . so i do n't think do the digits , but , i 'll leave my microphone on ? professor c: be out of here quickly . that 's have to run for another appointment . ok , i t . i left it on . ###summary: the icsi meeting recorder group met once more to discuss their recent progress in various projects , as well as discuss some of the issues that have arisen in the last week. there has been further work on voiced/unvoiced detection , along with spectral subtraction. the group discussed one member's attendance at a conference , and another group's code , which is proving hard to follow. in trying to understand france telecom's code , there does not appear to be a reason for a particular constant value. speaker fn002 reported on her work with mn007 , who was absent attending a conference. they have been working on voiced/unvoiced detection , and have tried two different configurations of mlp , although results have not been improved greatly. she has also begun looking at france telecom's recogniser code. speaker me006 has been coming up with tests for his work using acoustic events. speaker me018 is investigating differences in another system , and feels it may be down to bugs. speaker me026 has made real progress on his spectral subtraction work , with an implementation and early results.
professor a: so y you guys had a meeting with hynek which i unfortunately had to miss . and somebody professor a: so everybody knows what happened except me . ok . maybe somebody should tell me . phd c: alright . first we discussed about some of the points that i was addressing in the mail i sent last week . phd c: so . about the , the downsampling problem . and about the f the length of the filters and phd c: so the fact that there is no low - pass filtering before the downsampling . there is because there is lda filtering but that 's perhaps not the best w m professor a: depends what it 's frequency characteristic is , . so you could do a stricter one . professor a: so again this is th this is the downsampling of the feature vector stream and i the lda filters they were doing do have let 's see , so the feature vectors are calculated every ten milliseconds so the question is how far down they are at fifty hertz . . at twenty - five hertz since they 're downsampling by two . so . does anybody the frequency characteristic is ? phd c: so , . we should have a look first at , perhaps , the modulation spectrum . so there is this , there is the length of the filters . so the i this idea of trying to find filters with shorter delays . we started to work with this . and the third point was the , the on - line normalization where , the recursion f recursion for the mean estimation is a filter with some delay and that 's not taken into account right now . . and there again , . for this , the conclusion of hynek was , " we can try it but " phd c: so try to take into account the delay of the recursion for the mean estimation . and this we ' ve not worked on this yet . , . and so while discussing about these lda filters , some i issues appeared , like , the fact that if we look at the frequency response of these filters it 's , we really what 's the important part in the frequency response and there is the fact that in the very low frequency , these filters do n't really remove a lot . compared to the standard rasta filter . and that 's probably a reason why , on - line normalization helps because it , phd c: , it removed this mean . , but perhaps everything could should be could be in the filter , the mean normalization and so that was that 's all we discussed about . we discussed about good things to do also , generally good to do for the research . and this was this lda tuning perhaps and hynek proposed again to his traps , so . , professor a: i g i the key thing for me is figuring out how to better coordinate between the two sides cuz because i was talking with hynek about it later and the had the sense that neither group of people wanted to bother the other group too much . and and i do n't think anybody is , closed in their thinking or are unwilling to talk about things but that you were waiting for them to tell you that they had something for you and that and expected that they would do certain things and they were sor they did n't wanna bother you and they were waiting for you and we ended up with this thing where they were filling up all of the possible latency themselves , and they just had had n't thought of that . so . it 's true that maybe no one really thought about that this latency thing would be such a strict issue phd c: i what happened really , but i it 's also so the time constraints . 
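the downsampling worry earlier in this discussion can be checked numerically: features every 10 ms give a 100 hz frame rate, so downsampling the feature stream by two puts the new nyquist at 25 hz, where the lda filter should already be well attenuated to avoid aliasing. a scipy sketch with a stand-in filter (the real lda taps would be substituted):

import numpy as np
from scipy.signal import freqz

frame_rate = 100.0                    # features every 10 ms
taps = np.hanning(21)
taps /= taps.sum()                    # stand-in for the lda filter taps
w, h = freqz(taps, worN=512, fs=frame_rate)
nyq_new = frame_rate / 4              # 25 hz after downsampling by two
k = np.argmin(np.abs(w - nyq_new))
print(f"response at {w[k]:.1f} hz: {20 * np.log10(np.abs(h[k]) + 1e-12):.1f} db")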
because , we discussed about that about this problem and they told us " , we will do all that 's possible to have enough space for a network " but then , perhaps they were too short with the time and phd c: but there was also problem perhaps a problem of communication . now we will try to professor a: so there 's alright . maybe we should just you 're bus other than that you folks are busy doing all the things that you 're trying that we talked about before right ? and this machines are busy and you 're busy let 's let 's , that as we said before that one of the things that we 're imagining is that there will be in the system we end up with there 'll be something to explicitly do something about noise in addition to the other things that we 're talking about and that 's probably the best thing to do . and there was that one email that said that it sounded like things looked very promising up there in terms of they were using ericsson 's approach and in addition to they 're doing some noise removal thing , phd c: so we 're will start to do this also . so carmen is just looking at the ericsson code . phd d: , i modified it , modifying i studied barry 's sim code , more or less . to take @ the first step the spectral subtraction . and we have some the feature for italian database and we will try with this feature with the filter to find the result . but we have n't result until this moment . phd d: but , we are working in this also and maybe try another type of spectral subtraction , i do n't professor a: when you say you do n't have a result yet you mean it 's just that it 's in process or that you it finished and it did n't get a good result ? phd d: no . no , no n we have do the experiment only have the feature but the experiment have we have not make the experiment and maybe will be good result or bad result , we . professor a: so i suggest actually now we sorta move on and hear what 's happening in other areas like what 's happening with your investigations about echos and so on . grad f: i have n't started writing the test yet , i ' m meeting with adam today and he 's going t show me the scripts he has for running recognition on mee meeting recorder digits . i also have n't got the code yet , i have n't asked hynek for the for his code yet . cuz i looked at avendano 's thesis and i do n't really understand what he 's doing yet but it sounded like the channel normalization part of his thesis was done in a bit of i what the word is , a bit of a rough way it sounded like he it was n't really fleshed out and maybe he did something that was interesting for the test situation but i ' m not if it 's what i 'd wanna use so i have to read it more , i do n't really understand what he 's doing yet . professor a: i have n't read it in a while so i ' m not gon na be too much help unless i read it again , professor a: so . the so you , and then you 're also gon na be doing this echo cancelling between the close mounted and the what we 're calling a cheating experiment of sorts between the distant grad f: i ' m ho right . or i ' m hoping espen will do it . grad f: he 's at least planning to do it for the cl close - mike cross - talk and so maybe just take whatever setup he has and use it . professor a: great . actually he should i wonder who else is maybe it 's dan ellis is going to be doing a different cancellation . one of the things that people working in the meeting task wanna get at is they would like to have cleaner close - miked recordings . 
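a minimal sketch of the first-step spectral subtraction being tried on the italian features above: estimate a noise magnitude spectrum from the first few frames, subtract, and floor. the parameters and the noise-estimation rule are illustrative, not ericsson's actual algorithm.

import numpy as np

def spectral_subtract(frames_mag, noise_frames=10, alpha=1.0, floor=0.01):
    # frames_mag: (num_frames, num_bins) magnitude spectra. the noise
    # estimate is the average over the first frames, assumed speech-free;
    # alpha scales the subtraction and the floor avoids negative magnitudes.
    noise = frames_mag[:noise_frames].mean(axis=0)
    cleaned = frames_mag - alpha * noise
    return np.maximum(cleaned, floor * frames_mag)

mags = np.abs(np.fft.rfft(np.random.randn(50, 256), axis=1))
print(spectral_subtract(mags).shape)   # (50, 129)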
so this is especially true for the lapel but even for the close - miked cases we 'd like to be able to have other sounds from other people and removed from so when someone is n't speaking you 'd like the part where they 're not speaking to actually be so what they 're talking about doing is using ec echo cancellation - like techniques . it 's not really echo but just taking the input from other mikes and using a an adaptive filtering approach to remove the effect of that other speech . what was it , there was some point where eric or somebody was speaking and he had lots of silence in his channel and i was saying something to somebody else which was in the background and it was not it was recognizing my words , which were the background speech on the close mike . phd b: that was actually my i was wearing the lapel and you were sitting next to me , phd b: and i only said one thing but you were talking and it was picking up all your words . professor a: so they would like clean channels . and for that mmm that purpose they 'd like to pull it out . so i think dan ellis or somebody who was working with him was going to work on that . right ? . and i if we ' ve talked lately about the plans you 're developing that we talked about this morning i do n't remember if we talked about that last week or not , but maybe just a quick reprise of what we were saying this morning . phd b: what about the that mirjam has been doing ? and and s shawn , . so they 're training up nets to try to recognize these acoustic features ? i see . professor a: but that 's all that 's is a certainly relevant study and , what are the features that they 're finding . we have this problem with the overloading of the term " feature " what are the variables , what we 're calling this one , what are the variables that they 're found finding useful for professor a: and that 's certainly one thing to do and we 're gon na try and do something more f more fine than that but so i what , i was trying to remember some of the things we were saying , do you ha still have that ? there 's those that , some of the issues we were talking about was in j just getting a good handle on what " good features " are phd b: what does what did larry saul use for it was the sonorant detector , right ? how did he h how did he do that ? wh - what was his detector ? - . , ok . so how did he combine all these features ? what what r mmm classifier did he right . you were talking about that , . professor a: and the other thing you were talking about is where we get the targets from . so , there 's these issues of what are the variables that you use and do you combine them using the soft " and - or " or you do something , more complicated and then the other thing was so where do you get the targets from ? the initial thing is just the obvious that we 're discussing is starting up with phone labels from somewhere and then doing the transformation . but then the other thing is to do something better and w why do n't you tell us again about this database ? this is the phd b: pierced tongues and you could just mount it to that and they would n't even notice . weld it . zzz . professor a: maybe you could go to these parlors and you could , you know have , reduced rates if you can do the measurements . phd b: i that 's right . 
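a sketch of the adaptive-filtering idea just described for cleaning a close-miked channel: treat the other speaker's mike as the reference and subtract its filtered version from the target, lms-style. the filter length and step size are illustrative, and a real system would likely use a normalized update.

import numpy as np

def lms_cancel(target, reference, taps=64, mu=0.01):
    # adapt an fir filter so that the filtered reference (the other
    # speaker's close mike) predicts the crosstalk leaking into `target`;
    # the prediction error is the cleaned channel.
    w = np.zeros(taps)
    out = np.zeros(len(target))
    for i in range(taps, len(target)):
        x = reference[i - taps:i][::-1]   # most recent samples first
        e = target[i] - w @ x             # error = target minus predicted leak
        w += mu * e * x                   # lms weight update
        out[i] = e
    return out

fs = 8000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 440 * t)                      # other speaker's channel
leak = 0.3 * np.concatenate([np.zeros(5), ref[:-5]])   # delayed crosstalk
tgt = leak + 0.01 * np.random.randn(fs)                # silence plus leaked speech
print(np.abs(lms_cancel(tgt, ref)[fs // 2:]).mean())   # residual shrinks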
you could what you could do is you could sell little rings and with embedded , transmitters in them and things phd b: and there 's a bunch of data that l around , that people have done studies like that w way back i ca n't remember where wisconsin or someplace that used to have a big database of i remember there was this guy at a t - andt , randolph ? or r what was his name ? do you remember that guy ? , researcher at a t - andt a while back that was studying , trying to do speech recognition from these kinds of features . i ca n't remember what his name was . dang . now i 'll think of it . that 's interesting . phd c: is it the guy that was using the pattern of pressure on the tongue or ? phd b: i ca n't remember exactly what he was using , now . but i know remember it had to do with positional parameters professor a: so the only hesitation i had about it since , i have n't see the data is it sounds like it 's continuous variables and a bunch of them . and so i how complicated it is to go from there what you really want are these binary labels , and just a few of them . and maybe there 's a trivial mapping if you wanna do it and it 's e but it i worry a little bit that this is a research project in itself , whereas if you did something instead that like having some manual annotation by , linguistics students , this would there 'd be a limited s set of things that you could do a as per our discussions with john before but the things that you could do , like nasality and voicing and a couple other things you probably could do reasonably . and then there would it would really be this binary variable . course then , that 's the other question is do you want binary variables . the other thing you could do is boot trying to get those binary variables and take the continuous variables from the data itself there , but i ' m not professor a: so anyway that 's that 's another whole direction that cou could be looked at . in general it 's gon na be for new data that you look at , it 's gon na be hidden variable because we 're not gon na get everybody sitting in these meetings to wear the pellets and . so . phd b: so you 're talking about using that data to get instead of using canonical mappings of phones . so you 'd use that data to give you what the true mappings are for each phone ? professor a: so wh , where this fits into the rest in my mind , i , is that we 're looking at different ways that we can combine different kinds of rep front - end representations in order to get robustness under difficult or even , typical conditions . and part of it , this robustness , seems to come from multi - stream or multi - band sorts of things and saul seems to have a reasonable way of looking at it , at least for one articulatory feature . the question is can we learn from that to change some of the other methods we have , since , one of the things that 's about what he had was that it the decision about how strongly to train the different pieces is based on a reasonable criterion with hidden variables rather than just assuming that you should train e every detector with equal strength towards it being this phone or that phone . so it so he 's got these he " and 's " between these different features . it 's a soft " and " , i but in principle you wanna get a strong concurrence of all the different things that indicate something and then he " or 's " across the different soft " or 's " across the different multi - band channels . 
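written out, the combination being described is compact: a soft "and" (product) over the detector outputs within a band demands concurrence, and a soft "or" (noisy-or) across bands lets any one clean band carry the decision. a small numpy sketch of that structure only, leaving aside how the training targets are learned:

import numpy as np

def soft_and(detector_probs):
    # within one band: product over per-feature detector outputs in (0, 1);
    # high only when all detectors agree ("strong concurrence").
    return float(np.prod(detector_probs))

def soft_or(band_scores):
    # across bands: noisy-or, high if any single band fires, so one noisy
    # band cannot veto a clean one.
    return 1.0 - float(np.prod(1.0 - np.asarray(band_scores)))

bands = [np.array([0.9, 0.8]), np.array([0.2, 0.3]), np.array([0.85, 0.9])]
print(soft_or([soft_and(b) for b in bands]))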
and the weight , the target for the training of the " and ' ed " things is something that 's kept as a hidden variable , and is learned with em . whereas what we were doing is taking the phone target and then just back propagating from that professor a: which means that it 's i it could be that for a particular point in the data you do n't want to train a particular band train the detectors for a particular band . you you wanna ignore that band , cuz that 's a ban - band is a noisy measure . and we do n't we 're we 're still gon na try to train it up . in our scheme we 're gon na try to train it up to do as it can at predicting . maybe that 's not the right thing to do . professor a: at the tail end , he has to 's where it 's sonorant . but he 's but what he 's - but what he 's not training up what he does n't depend on as truth is professor a: i one way of describing would be if a sound is sonorant is it sonorant in this band ? is it sonorant in that band ? i it 's hard to even answer that what you really mean is that the whole sound is sonorant . so then it comes down to , to what extent should you make use of information from particular band towards making your decision . and we 're making in a sense this hard decision that you should use everything with equal strength . and because in the ideal case we would be going for posterior probabilities , if we had enough data to really get posterior probabilities and if the if we also had enough data so that it was representative of the test data then we would be doing the right thing to train everything as hard as we can . but this is something that 's more built up along an idea of robustness from the beginning and so you do n't necessarily want to train everything up towards the phd b: so where did he get his tar his high - level targets about what 's sonorant and what 's not ? grad e: using timit right , right . and then he does some fine tuning for special cases . professor a: we ha we have a iterative training because we do this embedded viterbi , so there is some something that 's suggested , based on the data but it 's not it s does n't seem like it 's quite the same , cuz of this cuz then whatever that alignment is , it 's that for all bands . no , that 's not quite right , we did actually do them separate tried to do them separately so that would be a little more like what he did . but it 's still not quite the same because then it 's setting targets based on where you would say the sound begins in a particular band . where he 's this is not a labeling per se . might be closer i if we did a soft target embedded neural net training like we ' ve done a few times f the forward do the forward calculations to get the gammas and train on those . mmm . what 's next ? phd b: ? yes , i ' m playing . so i wanted to do this experiment to see what happens if we try to improve the performance of the back - end recognizer for the aurora task and see how that affects things . and so i had this i sent around last week a this plan i had for an experiment , this matrix where i would take the original system . so there 's the original system trained on the mel cepstral features and then com and then optimize the b htk system and run that again . so look at the difference there and then do the same thing for the icsi - ogi front - end . phd b: this is that i looked at ? i ' m looking at the italian right now . 
so as far as i ' ve gotten is i ' ve been able to go through from beginning to end the full htk system for the italian data and got the same results that stephane had . so i started looking to and now i ' m lookin at the point where i wanna should i change in the htk back - end in order to try to improve it . so . one of the first things of was the fact that they use the same number of states for all of the models and so i went on - line and i found a pronunciation dictionary for italian digits and just looked at , the number of phones in each one of the digits . , the canonical way of setting up a an hmm system is that you use three states per phone and so then the total number of states for a word would just be , the number of phones times three . and so when i did that for the italian digits , i got a number of states , ranging on the low end from nine to the high end , eighteen . now you have to really add two to that because in htk there 's an initial null and a final null so when they use models that have eighteen states , there 're really sixteen states . they ' ve got those initial and final null states . and so their of eighteen states seems to be pretty matched to the two longest words of the italian digits , the four and five which , according to my , off the cuff calculation , should have eighteen states each . and so they had sixteen . so that 's pretty close . but for the most of the words are sh much shorter . so the majority of them wanna have nine states . and so theirs are s twice as long . so my and then if you i printed out a confusion matrix for the - matched case , and it turns out that the longest words are actually the ones that do the best . so my about what 's happening is that , if you assume a fixed the same amount of training data for each of these digits and a fixed length model for all of them but the actual words for some of them are half as long you really have , half as much training data for those models . because if you have a long word and you 're training it to eighteen states , you ' ve got the same number of gaussians , you ' ve got ta train in each case , but for the shorter words , the total number of frames is actually half as many . so it could be that , for the short words there 's because you have so many states , you just do n't have enough data to train all those gaussians . so i ' m going to try to create more word - specific prototype h m ms to start training from . professor a: , it 's not uncommon you do worse on long word on short words than long words anyway just because you 're accumulating more evidence for the longer word , but . phd b: so i 'll , the next experiment i ' m gon na try is to just create models that seem to be more w matched to my about how long they should be . and as part of that i wanted to see how the how these models were coming out , what w when we train up th , the model for " one " , which wants to have nine states , what is the what do the transition probabilities look like in the self - loops , look like in those models ? and so i talked to andreas and he explained to me how you can calculate the expected duration of an hmm just by looking at the transition matrix and so i wrote a little matlab script that calculates that so i ' m gon na print those out for each of the words to see what 's happening , how these models are training up , the long ones versus the short ones . 
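for a left-to-right model the calculation described reduces to a sum of geometric expectations: a state with self-loop probability a is occupied for 1/(1 - a) frames on average. a short python equivalent of that matlab script, with illustrative numbers:

import numpy as np

def expected_duration(self_loops, frame_s=0.01):
    # self_loops: self-transition probability a_ii for each emitting state
    # of a left-to-right hmm. a state is occupied for 1 / (1 - a_ii) frames
    # on average, so the expected word duration is the sum times the frame step.
    a = np.asarray(self_loops, dtype=float)
    return float(np.sum(1.0 / (1.0 - a))) * frame_s

# nine emitting states with 0.7 self-loops: 9 * (1 / 0.3) = 30 frames = 0.3 s
print(expected_duration([0.7] * 9))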
i d i did quickly , i did the silence model that 's coming out with about one point two seconds as its average duration and the silence model 's the one that 's used at the beginning and the end of each of the string of digits . phd b: , . and so the s p model , which is what they put in between digits , i have n't calculated that for that one yet , but . so they their model for a whole digit string is silence digit , sp , digit , sp blah - blah and then silence at the end . and so . phd b: i have to look at that , but i ' m not that they are . now the one thing about the s p model is really it only has a single s emitting state to it . so if it 's not optional , it 's not gon na hurt a whole lot and it 's tied to the center state of the silence model so it 's not its own it does n't require its own training data , it just shares that state . so it , it 's pretty good the way that they have it set up , but i so i wanna play with that a little bit more . i ' m curious about looking at , how these models have trained and looking at the expected durations of the models and i wanna compare that in the - matched case f to the unmatched case , and see if you can get an idea of just from looking at the durations of these models , what 's happening . professor a: , that , as much as you can , it 's good to d not do anything really tricky . not do anything that 's really finely tuned , but just you t you i z the premise is you have a good person look at this for a few weeks and what do you come up with ? and phd b: and hynek , when i wa told him about this , he had an interesting point , and that was th the final models that they end up training up have probably something on the order of six gaussians per state . so they 're fairly , hefty models . and hynek was saying that , probably in a real application , you would n't have enough compute to handle models that are very big or complicated . so what we may want are simpler models . phd b: and compare how they perform to that . but , it depends on what the actual application is and it 's really hard to your limits are in terms of how many gaussians you can have . professor a: and that , at the moment that 's not the limitation , i what you were gon na say i but which i was thinking was where did six come from ? probably came from the same place eighteen came from . , phd b: one thing , if i start reducing the number of states for some of these shorter models that 's gon na reduce the total number of gaussians . so in a sense it 'll be a simpler system . professor a: but right now again the idea is doing just very simple things how much better can you make it ? and since they 're only simple things there 's nothing that you 're gon na do that is going to blow up the amount of computation if you found that nine was better than six that would be o k , actually . does n't have to go down . phd b: i really was n't even gon na play with that part of the system yet , professor a: so what 's i your plan for you you guys ' plan for the next week is just continue on these same things we ' ve been talking about for aurora and phd c: i we can try to have some new baseline for next week perhaps . with all these minor things modified . and then do other things , play with the spectral subtraction , and retry the msg and things like that . professor a: you have a big list of things to do . that 's good . 
that after all of this confusion settles down in another some point a little later next year there will be some standard and it 'll get out there and hopefully it 'll have some effect from something that has been done by our group of people e even if it does n't there 's go there 'll be standards after that . phd b: does anybody know how to run matlab in batch mode like you c send it s a bunch of commands to run and it gives you the output . is it possible to do that ? grad e: what 's that ? , octave ? it 's free . we have it here r running somewhere . grad e: i it 's a little behind , it 's the same syntax but it 's a little behind in that matlab went to these like you can have cells and you can implement object - oriented type things with matlab . octave does n't do that yet , so you , octave is kinda like matlab four point something or .
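on the batch-mode question: matlab can be run non-interactively from the command line with the -nodisplay, -nosplash and -r flags; a sketch via python's subprocess, with a hypothetical script name:

import subprocess

# runs a hypothetical compute_durations.m non-interactively and exits;
# the script's output goes to stdout.
subprocess.run(
    ["matlab", "-nodisplay", "-nosplash", "-r", "compute_durations; exit"],
    check=True,
)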
the berkeley meeting recorder group met to discuss their recent progress. this included a recap of a meeting with one of the members of their research partner ogi. there were progress reports from group members working on echo cancellation , acoustic feature detection , and htk optimization , along with discussion of many issues arising from these topics. the group must try to communicate more with their research partners at ogi. there have been communication problems between the group and their partners at ogi , with both groups waiting for the other to be the one to make contact. this has made it hard to coordinate efforts. when looking at articulatory features , the data will be continuous , so may need a mapping to binary decisions. this could end up as a huge research project by itself , so must be careful. mn002 and fe002 are continuing to try to reduce delays and improve the system of the group's main digit recogniser project. this includes looking at filters , and trying to remove noise. in the coming weeks they have much to try. speaker me018 has also been looking to improve results on the aurora digit task. more specifically he is looking at the number of states in the hmms representing the digits , and is interested in shortening some of them , making them more word specific and hence more accurate. speaker me026 reports his minimal progress on echo cancellation work , in that he is awaiting code along with a better understanding of the original process he is studying. me006 is looking to combine previous sonorant detection work with other articulatory feature work.
###dialogue: professor a: so y you guys had a meeting with hynek which i unfortunately had to miss . and somebody professor a: so everybody knows what happened except me . ok . maybe somebody should tell me . phd c: alright . first we discussed about some of the points that i was addressing in the mail i sent last week . phd c: so . about the , the downsampling problem . and about the f the length of the filters and phd c: so the fact that there is no low - pass filtering before the downsampling . there is because there is lda filtering but that 's perhaps not the best w m professor a: depends what it 's frequency characteristic is , . so you could do a stricter one . professor a: so again this is th this is the downsampling of the feature vector stream and i the lda filters they were doing do have let 's see , so the feature vectors are calculated every ten milliseconds so the question is how far down they are at fifty hertz . . at twenty - five hertz since they 're downsampling by two . so . does anybody the frequency characteristic is ? phd c: so , . we should have a look first at , perhaps , the modulation spectrum . so there is this , there is the length of the filters . so the i this idea of trying to find filters with shorter delays . we started to work with this . and the third point was the , the on - line normalization where , the recursion f recursion for the mean estimation is a filter with some delay and that 's not taken into account right now . . and there again , . for this , the conclusion of hynek was , " we can try it but " phd c: so try to take into account the delay of the recursion for the mean estimation . and this we ' ve not worked on this yet . , . and so while discussing about these lda filters , some i issues appeared , like , the fact that if we look at the frequency response of these filters it 's , we really what 's the important part in the frequency response and there is the fact that in the very low frequency , these filters do n't really remove a lot . compared to the standard rasta filter . and that 's probably a reason why , on - line normalization helps because it , phd c: , it removed this mean . , but perhaps everything could should be could be in the filter , the mean normalization and so that was that 's all we discussed about . we discussed about good things to do also , generally good to do for the research . and this was this lda tuning perhaps and hynek proposed again to his traps , so . , professor a: i g i the key thing for me is figuring out how to better coordinate between the two sides cuz because i was talking with hynek about it later and the had the sense that neither group of people wanted to bother the other group too much . and and i do n't think anybody is , closed in their thinking or are unwilling to talk about things but that you were waiting for them to tell you that they had something for you and that and expected that they would do certain things and they were sor they did n't wanna bother you and they were waiting for you and we ended up with this thing where they were filling up all of the possible latency themselves , and they just had had n't thought of that . so . it 's true that maybe no one really thought about that this latency thing would be such a strict issue phd c: i what happened really , but i it 's also so the time constraints . 
because , we discussed about that about this problem and they told us " , we will do all that 's possible to have enough space for a network " but then , perhaps they were too short with the time and phd c: but there was also problem perhaps a problem of communication . now we will try to professor a: so there 's alright . maybe we should just you 're bus other than that you folks are busy doing all the things that you 're trying that we talked about before right ? and this machines are busy and you 're busy let 's let 's , that as we said before that one of the things that we 're imagining is that there will be in the system we end up with there 'll be something to explicitly do something about noise in addition to the other things that we 're talking about and that 's probably the best thing to do . and there was that one email that said that it sounded like things looked very promising up there in terms of they were using ericsson 's approach and in addition to they 're doing some noise removal thing , phd c: so we 're will start to do this also . so carmen is just looking at the ericsson code . phd d: , i modified it , modifying i studied barry 's sim code , more or less . to take @ the first step the spectral subtraction . and we have some the feature for italian database and we will try with this feature with the filter to find the result . but we have n't result until this moment . phd d: but , we are working in this also and maybe try another type of spectral subtraction , i do n't professor a: when you say you do n't have a result yet you mean it 's just that it 's in process or that you it finished and it did n't get a good result ? phd d: no . no , no n we have do the experiment only have the feature but the experiment have we have not make the experiment and maybe will be good result or bad result , we . professor a: so i suggest actually now we sorta move on and hear what 's happening in other areas like what 's happening with your investigations about echos and so on . grad f: i have n't started writing the test yet , i ' m meeting with adam today and he 's going t show me the scripts he has for running recognition on mee meeting recorder digits . i also have n't got the code yet , i have n't asked hynek for the for his code yet . cuz i looked at avendano 's thesis and i do n't really understand what he 's doing yet but it sounded like the channel normalization part of his thesis was done in a bit of i what the word is , a bit of a rough way it sounded like he it was n't really fleshed out and maybe he did something that was interesting for the test situation but i ' m not if it 's what i 'd wanna use so i have to read it more , i do n't really understand what he 's doing yet . professor a: i have n't read it in a while so i ' m not gon na be too much help unless i read it again , professor a: so . the so you , and then you 're also gon na be doing this echo cancelling between the close mounted and the what we 're calling a cheating experiment of sorts between the distant grad f: i ' m ho right . or i ' m hoping espen will do it . grad f: he 's at least planning to do it for the cl close - mike cross - talk and so maybe just take whatever setup he has and use it . professor a: great . actually he should i wonder who else is maybe it 's dan ellis is going to be doing a different cancellation . one of the things that people working in the meeting task wanna get at is they would like to have cleaner close - miked recordings . 
so this is especially true for the lapel but even for the close - miked cases we 'd like to be able to have other sounds from other people and removed from so when someone is n't speaking you 'd like the part where they 're not speaking to actually be so what they 're talking about doing is using ec echo cancellation - like techniques . it 's not really echo but just taking the input from other mikes and using a an adaptive filtering approach to remove the effect of that other speech . what was it , there was some point where eric or somebody was speaking and he had lots of silence in his channel and i was saying something to somebody else which was in the background and it was not it was recognizing my words , which were the background speech on the close mike . phd b: that was actually my i was wearing the lapel and you were sitting next to me , phd b: and i only said one thing but you were talking and it was picking up all your words . professor a: so they would like clean channels . and for that mmm that purpose they 'd like to pull it out . so i think dan ellis or somebody who was working with him was going to work on that . right ? . and i if we ' ve talked lately about the plans you 're developing that we talked about this morning i do n't remember if we talked about that last week or not , but maybe just a quick reprise of what we were saying this morning . phd b: what about the that mirjam has been doing ? and and s shawn , . so they 're training up nets to try to recognize these acoustic features ? i see . professor a: but that 's all that 's is a certainly relevant study and , what are the features that they 're finding . we have this problem with the overloading of the term " feature " what are the variables , what we 're calling this one , what are the variables that they 're found finding useful for professor a: and that 's certainly one thing to do and we 're gon na try and do something more f more fine than that but so i what , i was trying to remember some of the things we were saying , do you ha still have that ? there 's those that , some of the issues we were talking about was in j just getting a good handle on what " good features " are phd b: what does what did larry saul use for it was the sonorant detector , right ? how did he h how did he do that ? wh - what was his detector ? - . , ok . so how did he combine all these features ? what what r mmm classifier did he right . you were talking about that , . professor a: and the other thing you were talking about is where we get the targets from . so , there 's these issues of what are the variables that you use and do you combine them using the soft " and - or " or you do something , more complicated and then the other thing was so where do you get the targets from ? the initial thing is just the obvious that we 're discussing is starting up with phone labels from somewhere and then doing the transformation . but then the other thing is to do something better and w why do n't you tell us again about this database ? this is the phd b: pierced tongues and you could just mount it to that and they would n't even notice . weld it . zzz . professor a: maybe you could go to these parlors and you could , you know have , reduced rates if you can do the measurements . phd b: i that 's right . 
you could what you could do is you could sell little rings and with embedded , transmitters in them and things phd b: and there 's a bunch of data that l around , that people have done studies like that w way back i ca n't remember where wisconsin or someplace that used to have a big database of i remember there was this guy at a t - andt , randolph ? or r what was his name ? do you remember that guy ? , researcher at a t - andt a while back that was studying , trying to do speech recognition from these kinds of features . i ca n't remember what his name was . dang . now i 'll think of it . that 's interesting . phd c: is it the guy that was using the pattern of pressure on the tongue or ? phd b: i ca n't remember exactly what he was using , now . but i know remember it had to do with positional parameters professor a: so the only hesitation i had about it since , i have n't see the data is it sounds like it 's continuous variables and a bunch of them . and so i how complicated it is to go from there what you really want are these binary labels , and just a few of them . and maybe there 's a trivial mapping if you wanna do it and it 's e but it i worry a little bit that this is a research project in itself , whereas if you did something instead that like having some manual annotation by , linguistics students , this would there 'd be a limited s set of things that you could do a as per our discussions with john before but the things that you could do , like nasality and voicing and a couple other things you probably could do reasonably . and then there would it would really be this binary variable . course then , that 's the other question is do you want binary variables . the other thing you could do is boot trying to get those binary variables and take the continuous variables from the data itself there , but i ' m not professor a: so anyway that 's that 's another whole direction that cou could be looked at . in general it 's gon na be for new data that you look at , it 's gon na be hidden variable because we 're not gon na get everybody sitting in these meetings to wear the pellets and . so . phd b: so you 're talking about using that data to get instead of using canonical mappings of phones . so you 'd use that data to give you what the true mappings are for each phone ? professor a: so wh , where this fits into the rest in my mind , i , is that we 're looking at different ways that we can combine different kinds of rep front - end representations in order to get robustness under difficult or even , typical conditions . and part of it , this robustness , seems to come from multi - stream or multi - band sorts of things and saul seems to have a reasonable way of looking at it , at least for one articulatory feature . the question is can we learn from that to change some of the other methods we have , since , one of the things that 's about what he had was that it the decision about how strongly to train the different pieces is based on a reasonable criterion with hidden variables rather than just assuming that you should train e every detector with equal strength towards it being this phone or that phone . so it so he 's got these he " and 's " between these different features . it 's a soft " and " , i but in principle you wanna get a strong concurrence of all the different things that indicate something and then he " or 's " across the different soft " or 's " across the different multi - band channels . 
and the weight , the target for the training of the " and ' ed " things is something that 's kept as a hidden variable , and is learned with em . whereas what we were doing is taking the phone target and then just back propagating from that professor a: which means that it 's i it could be that for a particular point in the data you do n't want to train a particular band train the detectors for a particular band . you you wanna ignore that band , cuz that 's a ban - band is a noisy measure . and we do n't we 're we 're still gon na try to train it up . in our scheme we 're gon na try to train it up to do as it can at predicting . maybe that 's not the right thing to do . professor a: at the tail end , he has to 's where it 's sonorant . but he 's but what he 's - but what he 's not training up what he does n't depend on as truth is professor a: i one way of describing would be if a sound is sonorant is it sonorant in this band ? is it sonorant in that band ? i it 's hard to even answer that what you really mean is that the whole sound is sonorant . so then it comes down to , to what extent should you make use of information from particular band towards making your decision . and we 're making in a sense this hard decision that you should use everything with equal strength . and because in the ideal case we would be going for posterior probabilities , if we had enough data to really get posterior probabilities and if the if we also had enough data so that it was representative of the test data then we would be doing the right thing to train everything as hard as we can . but this is something that 's more built up along an idea of robustness from the beginning and so you do n't necessarily want to train everything up towards the phd b: so where did he get his tar his high - level targets about what 's sonorant and what 's not ? grad e: using timit right , right . and then he does some fine tuning for special cases . professor a: we ha we have a iterative training because we do this embedded viterbi , so there is some something that 's suggested , based on the data but it 's not it s does n't seem like it 's quite the same , cuz of this cuz then whatever that alignment is , it 's that for all bands . no , that 's not quite right , we did actually do them separate tried to do them separately so that would be a little more like what he did . but it 's still not quite the same because then it 's setting targets based on where you would say the sound begins in a particular band . where he 's this is not a labeling per se . might be closer i if we did a soft target embedded neural net training like we ' ve done a few times f the forward do the forward calculations to get the gammas and train on those . mmm . what 's next ? phd b: ? yes , i ' m playing . so i wanted to do this experiment to see what happens if we try to improve the performance of the back - end recognizer for the aurora task and see how that affects things . and so i had this i sent around last week a this plan i had for an experiment , this matrix where i would take the original system . so there 's the original system trained on the mel cepstral features and then com and then optimize the b htk system and run that again . so look at the difference there and then do the same thing for the icsi - ogi front - end . phd b: this is that i looked at ? i ' m looking at the italian right now . 
so as far as i ' ve gotten is i ' ve been able to go through from beginning to end the full htk system for the italian data and got the same results that stephane had . so i started looking to and now i ' m lookin at the point where i wanna should i change in the htk back - end in order to try to improve it . so . one of the first things of was the fact that they use the same number of states for all of the models and so i went on - line and i found a pronunciation dictionary for italian digits and just looked at , the number of phones in each one of the digits . , the canonical way of setting up a an hmm system is that you use three states per phone and so then the total number of states for a word would just be , the number of phones times three . and so when i did that for the italian digits , i got a number of states , ranging on the low end from nine to the high end , eighteen . now you have to really add two to that because in htk there 's an initial null and a final null so when they use models that have eighteen states , there 're really sixteen states . they ' ve got those initial and final null states . and so their of eighteen states seems to be pretty matched to the two longest words of the italian digits , the four and five which , according to my , off the cuff calculation , should have eighteen states each . and so they had sixteen . so that 's pretty close . but for the most of the words are sh much shorter . so the majority of them wanna have nine states . and so theirs are s twice as long . so my and then if you i printed out a confusion matrix for the - matched case , and it turns out that the longest words are actually the ones that do the best . so my about what 's happening is that , if you assume a fixed the same amount of training data for each of these digits and a fixed length model for all of them but the actual words for some of them are half as long you really have , half as much training data for those models . because if you have a long word and you 're training it to eighteen states , you ' ve got the same number of gaussians , you ' ve got ta train in each case , but for the shorter words , the total number of frames is actually half as many . so it could be that , for the short words there 's because you have so many states , you just do n't have enough data to train all those gaussians . so i ' m going to try to create more word - specific prototype h m ms to start training from . professor a: , it 's not uncommon you do worse on long word on short words than long words anyway just because you 're accumulating more evidence for the longer word , but . phd b: so i 'll , the next experiment i ' m gon na try is to just create models that seem to be more w matched to my about how long they should be . and as part of that i wanted to see how the how these models were coming out , what w when we train up th , the model for " one " , which wants to have nine states , what is the what do the transition probabilities look like in the self - loops , look like in those models ? and so i talked to andreas and he explained to me how you can calculate the expected duration of an hmm just by looking at the transition matrix and so i wrote a little matlab script that calculates that so i ' m gon na print those out for each of the words to see what 's happening , how these models are training up , the long ones versus the short ones . 
i d i did quickly , i did the silence model that 's coming out with about one point two seconds as its average duration and the silence model 's the one that 's used at the beginning and the end of each of the string of digits . phd b: , . and so the s p model , which is what they put in between digits , i have n't calculated that for that one yet , but . so they their model for a whole digit string is silence digit , sp , digit , sp blah - blah and then silence at the end . and so . phd b: i have to look at that , but i ' m not that they are . now the one thing about the s p model is really it only has a single s emitting state to it . so if it 's not optional , it 's not gon na hurt a whole lot and it 's tied to the center state of the silence model so it 's not its own it does n't require its own training data , it just shares that state . so it , it 's pretty good the way that they have it set up , but i so i wanna play with that a little bit more . i ' m curious about looking at , how these models have trained and looking at the expected durations of the models and i wanna compare that in the - matched case f to the unmatched case , and see if you can get an idea of just from looking at the durations of these models , what 's happening . professor a: , that , as much as you can , it 's good to d not do anything really tricky . not do anything that 's really finely tuned , but just you t you i z the premise is you have a good person look at this for a few weeks and what do you come up with ? and phd b: and hynek , when i wa told him about this , he had an interesting point , and that was th the final models that they end up training up have probably something on the order of six gaussians per state . so they 're fairly , hefty models . and hynek was saying that , probably in a real application , you would n't have enough compute to handle models that are very big or complicated . so what we may want are simpler models . phd b: and compare how they perform to that . but , it depends on what the actual application is and it 's really hard to your limits are in terms of how many gaussians you can have . professor a: and that , at the moment that 's not the limitation , i what you were gon na say i but which i was thinking was where did six come from ? probably came from the same place eighteen came from . , phd b: one thing , if i start reducing the number of states for some of these shorter models that 's gon na reduce the total number of gaussians . so in a sense it 'll be a simpler system . professor a: but right now again the idea is doing just very simple things how much better can you make it ? and since they 're only simple things there 's nothing that you 're gon na do that is going to blow up the amount of computation if you found that nine was better than six that would be o k , actually . does n't have to go down . phd b: i really was n't even gon na play with that part of the system yet , professor a: so what 's i your plan for you you guys ' plan for the next week is just continue on these same things we ' ve been talking about for aurora and phd c: i we can try to have some new baseline for next week perhaps . with all these minor things modified . and then do other things , play with the spectral subtraction , and retry the msg and things like that . professor a: you have a big list of things to do . that 's good . 
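to make the "simpler models" point concrete , a back-of-the-envelope parameter count; every number here is a plausible placeholder (diagonal covariances assumed) , not taken from the actual htk setup:

    # hypothetical sizes: 39-dim features, 6 gaussians per state, and either
    # the fixed 16 emitting states or the 9 that a short word actually wants
    dim, mixtures = 39, 6
    for states in (16, 9):
        gaussians = states * mixtures
        params = gaussians * (2 * dim + 1)   # mean + diag variance + weight
        print(states, "states:", gaussians, "gaussians,", params, "parameters")

so cutting the short words from sixteen states to nine roughly halves their gaussian count as well.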
that after all of this confusion settles down in another some point a little later next year there will be some standard and it 'll get out there and hopefully it 'll have some effect from something that has been done by our group of people e even if it does n't there 's go there 'll be standards after that . phd b: does anybody know how to run matlab in batch mode like you c send it s a bunch of commands to run and it gives you the output . is it possible to do that ? grad e: what 's that ? , octave ? it 's free . we have it here r running somewhere . grad e: i it 's a little behind , it 's the same syntax but it 's a little behind in that matlab went to these like you can have cells and you can implement object - oriented type things with matlab . octave does n't do that yet , so you , octave is kinda like matlab four point something or . ###summary: the berkeley meeting recorder group met to discuss their recent progress. this included a recap of a meeting with one of the members of their research partner ogi. there were progress reports from group members working on echo cancellation , acoustic feature detection , and htk optimization , along with discussion of many issues arising from these topics. the group must try to communicate more with research partner ogi. there have been communication problems between the group and their partners at ogi , with both groups waiting for the other to be the one to make contact. this has made it hard to coordinate efforts. when looking at articulatory features , the data will be continuous , so may need a mapping to binary decisions. this could end up as a huge research project by itself , so must be careful. mn002 and fe002 are continuing to try to reduce delays and improve the system of the group's main digit recogniser project. this includes looking at filters , and trying to remove noise. in the coming weeks they have much to try. speaker me018 has also been looking to improve results on the aurora digit task. more specifically he is looking at the number of states in the hmms representing the digits , and is interested in shortening some of them , making them more word specific and hence more accurate. speaker me026 reports his minimal progress on echo cancellation work , in that he is awaiting code along with a better understanding of the original process he is studying. me006 is looking to combine previous sonorant detection work with other articulatory feature work.
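on the batch-mode question raised in the dialogue above: one way , sketched here under the assumption that gnu octave is installed and on the path , is to drive the interpreter from a script; matlab 's rough equivalent is matlab -nodisplay -r "commands; exit" :

    import subprocess

    # send octave a batch of commands non-interactively and capture the output
    result = subprocess.run(
        ["octave", "--eval", "x = magic(4); disp(trace(x))"],
        capture_output=True, text=True)
    print(result.stdout)   # -> 34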
professor e: i was saying hynek 'll be here next week , wednesday through friday , through saturday , and , i wo n't be here thursday and friday . but my suggestion is that , at least for this meeting , people should go ahead , cuz hynek will be here , and , we do n't have any czech accent yet , as far as i know , so there we go . phd f: i do n't really have , anything new . been working on meeting recorder . so . professor e: do you think that would be the case for next week also ? or is , ? what 's your projection on ? cuz the one thing that seems to me we really should try , if you had n't tried it before , because it had n't occurred to me it was an obvious thing is , adjusting the , sca the scaling and , insertion penalty sorta . phd f: i did play with that , actually , a little bit . what happens is , when you get to the noisy , you start getting lots of insertions . phd f: and , so i ' ve tried playing around a little bit with , the insertion penalties and things like that . , it did n't make a whole lot of difference . like for the - matched case , it seemed like it was pretty good . i could do more playing with that , though . and , phd f: and see . yes . , you 're talking about for th for our features . professor e: so , i it 's not the direction that you were working with that we were saying what 's the , what 's the best you can do with mel cepstrum . but , they raised a very valid point , professor e: which , i so , to first order , you have other things you were gon na do , but to first order , i would say that the conclusion is that if you , do , some monkeying around with , the exact htk training and @ with , , how many states and , that it does n't particularly improve the performance . in other words , that even though it sounds pretty dumb , just applying the same number of states to everything , more or less , no matter what language , is n't so bad . right ? and i you had n't gotten to all the experiments you wanted to do with number of gaussians , professor e: but , let 's just if we had to if we had to draw a conclusion on the information we have so far , we 'd say something like that . professor e: , so the next question to ask , which is the one that andreas was dre addressing himself to in the lunch meeting , is , we 're not supposed to adjust the back - end , but anybody using the system would . so , if you were just adjusting the back - end , how much better would you do , in noise ? , because the language scaling and insertion penalties and are probably set to be about right for mel cepstrum . but , they 're probably not set right for these things , particularly these things that look over , larger time windows , in one way or another with lda and klt and neural nets and all these things . in the fa past we ' ve always found that we had to increase the insertion penalty to correspond to such things . that 's , @ that 's a first - order thing that we should try . phd f: so for th so the experiment is to , run our front - end like normal , with the default , insertion penalties and , and then tweak that a little bit and see how much of a difference it makes professor e: so by " our front - end " take , the aurora - two s take some version that stephane has that is , our current best version of something . professor e: , y do n't wanna do this over a hundred different things that they ' ve tried but , for some version that you say is a good one . ? how how much , does it improve if you actually adjust that ? but it is interesting . 
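for orientation , a sketch of where those two back-end knobs sit in a typical decoder's hypothesis score; the default values below are invented , not htk 's:

    def hyp_score(acoustic_logprob, lm_logprob, n_words,
                  lm_scale=8.0, word_insertion_penalty=-10.0):
        # the language-model scale and the per-word insertion penalty are the
        # two knobs under discussion; a more negative penalty makes the
        # decoder prefer fewer words, trading insertions for deletions
        return (acoustic_logprob
                + lm_scale * lm_logprob
                + n_words * word_insertion_penalty)

    # same acoustics: the penalty decides between a 3-word and a 4-word reading
    print(hyp_score(-1000.0, -12.0, 3), hyp_score(-1000.0, -11.0, 4))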
you say you have for the noisy how about for the mismatched or the medium mismatched conditions ? have you ? when you adjusted those numbers for mel cepstrum , did it ? phd f: , i do n't remember off the top of my head . i did n't even write them down . i do n't remember . i would need to , i did write down , so , when i was doing wrote down some numbers for the - matched case . looking at the i wrote down what the deletions , substitutions , and insertions were , for different numbers of states per phone . , but , that 's all i wrote down . i would need to do that . professor e: and , also , sometimes if you run behind on some of these things , maybe we can get someone else to do it and you can supervise . but but it would be it 'd be good to know that . phd f: need to get , front - end , from you or you point me to some files that you ' ve already calculated . phd f: i probably will have time to do that and time to play a little bit with the silence model . professor e: cuz , the other that , might have been part of what , the difference was at least part of it that we were seeing . remember we were seeing the sri system was so much better than the tandem system . part of it might just be that the sri system , they always adjust these things to be optimized , phd f: i wonder if there 's anything that we could do to the front - end that would affect the insertion professor e: , part of what 's going on , is the , the range of values . so , if you have something that has a much smaller range or a much larger range , and taking the appropriate root . if something is like the equivalent of a bunch of probabilities multiplied together , you can take a root of some sort . if it 's like seven probabilities together , you can take the seventh root of it , or if it 's in the log domain , divide it by seven . but , that has a similar effect because it changes the scale of the numbers of the differences between different candidates from the acoustic model phd f: so that w so , in effect , that 's changing the value of your insertion penalty . professor e: , it 's more directly like the language scaling or the , the model scaling or acoustic scaling , professor e: but that those things have a similar effect to the insertion penalty anyway . they 're a slightly different way of handling it . phd f: so if we the insertion penalty is , then we can get an idea about what range our number should be in , professor e: so that 's why that 's another reason other than curiosity as to why i it would be kinda neat to find out if we 're way off . , the other thing is , are are n't we seeing ? y y i ' m you ' ve already looked at this bu in these noisy cases , are ? we are seeing lots of insertions . the insertion number is quite high ? i know the vad takes pre care of part of that , but phd f: i ' ve seen that with the mel cepstrum . i do n't i about the aurora front - end , but phd b: it 's much more balanced with , when the front - end is more robust . i could look at it at this . professor e: i ' m it 's more balanced , but it would n't surprise me if there 's still , in the old systems we used to do , i remember numbers like insertions being half the number of deletions , as being and both numbers being tend to be on the small side comparing to , substitutions . 
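the substitution , deletion , and insertion counts referred to throughout come from the usual levenshtein alignment of reference words against hypothesis words; a minimal sketch:

    def align_counts(ref, hyp):
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        n, m = len(ref), len(hyp)
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i
        for j in range(m + 1):
            d[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                same = ref[i - 1] == hyp[j - 1]
                d[i][j] = min(d[i - 1][j - 1] + (not same),  # match / substitution
                              d[i - 1][j] + 1,               # deletion
                              d[i][j - 1] + 1)               # insertion
        # walk back through the table to classify each error
        i, j, subs, dels, ins = n, m, 0, 0, 0
        while i > 0 or j > 0:
            if i > 0 and j > 0 and \
                    d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
                subs += ref[i - 1] != hyp[j - 1]
                i, j = i - 1, j - 1
            elif i > 0 and d[i][j] == d[i - 1][j] + 1:
                dels, i = dels + 1, i - 1
            else:
                ins, j = ins + 1, j - 1
        return subs, dels, ins

    # e.g. two insertions, no substitutions or deletions: (0, 0, 2)
    print(align_counts("one two three".split(), "one oh two two three".split()))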
phd f: , this the whole problem with insertions was what , we talked about when the guy from ogi came down that one time and that was when people were saying , we should have a , voice activity detector that , because all that we 're getting thr the silence that 's getting through is causing insertions . professor e: and it may be less of a critical thing . , the fact that some get by may be less of a critical thing if you , get things in the right range . so , the insertions is a symptom . it 's a symptom that there 's something , wrong with the range . but there 's , your substitutions tend to go up as . so , i that , the most obvious thing is just the insertions , @ . if you 're operating in the wrong range , that 's why just in general , if you change what these penalties and scaling factors are , you reach some point that 's a minimum . so . we do have to do over a range of different conditions , some of which are noisier than others . but , we may get a better handle on that if we see , we ca it 's if we actually could pick a more stable value for the range of these features , it , , could even though it 's true that in a real situation you can adjust the these scaling factors in the back - end , and it 's ar artificial here that we 're not adjusting those , you certainly do n't wanna be adjusting those all the time . and if you have a front - end that 's in roughly the right range i remember after we got our more or less together in the previous systems we built , that we tended to set those scaling factors at a standard level , and we would rarely adjust them again , even though you could get a for an evaluation you can get an extra point if you tweaked it a little bit . but , once we knew what rou roughly the right operating range was , it was pretty stable , and , we might just not even be in the right operating range . phd f: so , would the ? , would a good idea be to try to map it into the same range that you get in the - matched case ? so , if we computed what the range was in - matched , and then when we get our noisy conditions out we try to make it have the same range as ? professor e: no . you do n't wanna change it for different conditions . no . i what what i ' m saying phd f: , i was n't suggesting change it for different conditions . i was just saying that when we pick a range , we wanna pick a range that we map our numbers into we should probably pick it based on the range that we get in the - matched case . otherwise , what range are we gon na choose to map everything into ? professor e: . it depends how much we wanna do gamesmanship and how much we wanna do , i if he it to me , actually , even if you wanna be play on the gamesmanship side , it can be kinda tricky . so , what you would do is set the scaling factors , so that you got the best number for this point four five times the , and so on . but they might change that those weightings . sorta think we need to explore the space . just take a look at it a little bit . and we may just find that we 're way off . maybe we 're not . as for these other things , it may turn out that , it 's reasonable . but then , andreas gave a very reasonable response , and he 's probably not gon na be the only one who 's gon na say this in the future of , people within this tight - knit community who are doing this evaluation are accepting , more or less , that these are the rules . but , people outside of it who look in at the broader picture are certainly gon na say " , a minute . 
you 're doing all this standing on your head , on the front - end , when all you could do is just adjust this in the back - end with one s one knob . " and so we have to at least , determine that 's not true , which would be ok , or determine that it is true , in which case we want to adjust that and then continue with what we 're doing . and as you say as you point out finding ways to then compensate for that in the front - end also then becomes a priority for this particular test , and saying you do n't have to do that . what 's new with you ? professor e: you what 's old with you that has developed over the last week or two ? phd f: how about that ? any - anything new on the thing that , you were working on with the , ? phd c: , to try to found , nnn , robust feature for detect between voice and unvoice . and we w we try to use the variance of the es difference between the fft spectrum and mel filter bank spectrum . , also the another parameter is relates with the auto - correlation function . r - ze energy and the variance a also of the auto - correlation function . professor e: so , that 's . that 's what you were describing , i , a week or two ago . phd c: but we do n't have res we do n't have result of the auro for aurora yet . we need to train the neural network and phd c: , we work in the report , too , because we have a lot of result , they are very dispersed , and was necessary to look in all the directory to give some more structure . professor e: so . b so i if summarize , what 's going on is that you 're going over a lot of material that you have generated in furious fashion , f generating many results and doing many experiments and trying to pull it together into some coherent form to be able to see wha see what happens . phd f: is this a report that 's for aurora ? or is it just like a tech report for icsi , professor e: so , my suggestion , though , is that you not necessarily finish that . but that you put it all together so that it 's you ' ve got a clearer structure to it . what things are , you have things documented , you ' ve looked things up that you needed to look up . so that , so that such a thing can be written . when when do you leave again ? professor e: first of july ? and that you figure on actually finishing it in june . because , you 're gon na have another bunch of results to fit in there anyway . professor e: so so , it 's good to pause , and to gather everything together and make it 's in good shape , so that other people can get access to it and so that it can go into a report in june . but to really work on fine - tuning the report n at this point is probably bad timing , . phd b: , we did n't we just planned to work on it one week on this report , not no more , anyway . professor e: but you ma you may really wanna add other things later anyway because you there 's more to go ? phd f: are you discovering anything , that makes you scratch your head as you write this report , like why did we do that , or why did n't we do this , phd b: actually , there were some tables that were also with partial results . we just noticed that , wh while gathering the result that for some conditions we did n't have everything . but anyway . , . we have , extracted actually the noises from the speechdat - car . and so , we can train neural network with speech and these noises . 
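training networks on speech plus the extracted noises usually means mixing the two at a chosen signal-to-noise ratio; a common recipe , sketched with the caveat that the exact energy weighting varies between corpora:

    import numpy as np

    def add_noise(speech, noise, snr_db):
        # loop or trim the noise to the speech length, then scale it so the
        # speech-to-noise energy ratio comes out at the requested snr in db
        speech = np.asarray(speech, dtype=float)
        noise = np.resize(np.asarray(noise, dtype=float), speech.shape)
        gain = np.sqrt(np.mean(speech ** 2) /
                       (np.mean(noise ** 2) * 10.0 ** (snr_db / 10.0)))
        return speech + gain * noise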
it 's difficult to say what it will give , because when we look at the aurora the ti - digits experiments , they have these three conditions that have different noises , and this system perform as on the seen noises on the unseen noises and on the seen noises . but , this is something we have to try anyway . so adding the noises from the speechdat - car . phd b: ogi does did that . at some point they did that for the voice activity detector . phd b: they used some parts of the , italian database to train the voice activity detector , . it professor e: . i that 's a matter of interpretation . the rules as i understand it , is that in principle the italian and the spanish and the english no , italian and the finnish and the english ? were development data professor e: on which you could adjust things . and the and the german and danish were the evaluation data . and then when they finally actually evaluated things they used everything . professor e: and it is true that the performance , on the german was , even though the improvement was n't so good , the pre the raw performance was really pretty good . and , it does n't appear that there 's strong evidence that even though things were somewhat tuned on those three or four languages , that going to a different language really hurt you . and the noises were not exactly the same . because it was taken from a different , they were different drives . professor e: , it was actual different cars and so on . , it 's somewhat tuned . it 's tuned more than , a you 'd really like to have something that needed no particular noise , maybe just some white noise like that a at most . but that 's not really what this contest is . , i it 's ok . that 's something i 'd like to understand before we actually use something from it , phd f: it 's probably something that , mmm , the , experiment designers did n't really think about , because most people are n't doing trained systems , or , , systems that are like ours , where you actually use the data to build models . , they just doing signal - processing . professor e: , it 's true , except that , that 's what we used in aurora one , and then they designed the things for aurora - two knowing that we were doing that . phd f: that 's true . and they did n't forbid us right ? to build models on the data ? professor e: but , i think that it probably would be the case that if , say , we trained on italian , data and then , we tested on danish data and it did terribly , that it would look bad . and someone would notice and would say " , look . this is not generalizing . " i would hope tha i would hope they would . but , it 's true . , maybe there 's parameters that other people have used , th that they have tuned in some way for other things . so it 's , we should we should maybe that 's maybe a topic especially if you talk with him when i ' m not here , that 's a topic you should discuss with hynek to , double check it 's ok . phd f: , i was thinking about things like , gender - specific nets and , vocal tract length normalization . things like that . i d i do n't i did n't information we have about the speakers that we could try to take advantage of . professor e: , again , i if you had the whole system you were optimizing , that would be easy to see . but if you 're supposedly just using a fixed back - end and you 're just coming up with a feature vector , w i ' m not , having the two nets suppose you detected that it was male , it was female you come up with different professor e: , it 's an interesting thought . 
maybe having something along the , you ca n't really do vocal tract normalization . but something that had some of that effect being applied to the data in some way . phd f: no . i had n't thought it was thought too much about it , really . it just something that popped into my head just now . and so i , you could maybe use the ideas a similar idea to what they do in vocal tract length normalization . , you have some a , general speech model , maybe just a mixture of gaussians that you evaluate every utterance against , and then you see where each , utterance like , the likelihood of each utterance . you divide the range of the likelihoods up into discrete bins and then each bin 's got some knob , setting . professor e: . but just listen to yourself . , that really does n't sound like a real - time thing with less than two hundred milliseconds , latency that and where you 're not adjusting the statistical engine . , that just professor e: not just expensive . i do n't see how you could possibly do it . you ca n't look at the whole utterance and do anything . , you can only each frame comes in and it 's got ta go out the other end . phd f: so whatever it was , it would have to be on a per frame basis . professor e: , you can do , fairly quickly you can do male female f male female . but as far as , like bbn did a thing with , vocal tract normalization a ways back . maybe other people did too . with with , l trying to identify third formant average third formant using that as an indicator of , third formant i if you imagine that to first order what happens with , changing vocal tract is that , the formants get moved out by some proportion so , if you had a first formant that was five hundred hertz before , if the fifty if the vocal tract is fifty percent shorter , then it would be out at seven fifty hertz , and so on . so , that 's a move of two hundred fifty hertz . whereas the third formant which might have started off at twenty - five hundred hertz , might be out to thirty - seven fifty , so it 's at although , you frequently get less distinct higher formants , it 's still third formant 's a reasonable compromise , so , , if i recall correctly , they did something like that . and and but , that does n't work for just having one frame . ? that 's more like looking at third formant over a turn like that , but on the other hand , male female is a much simpler categorization than figuring out a factor to , squish or expand the spectrum . so , . y you could imagine that , just like we 're saying voiced - unvoiced is good to know , male female is good to know also . but , you 'd have to figure out a way to , incorporate it on the fly . , i , as you say , one thing you could do is simply , have the male and female output vectors , tr nets trained only on males and n trained only on females or , i if that would really help , because you already have males and females and it 's - putting into one net . is it ? phd b: . so , this noise , . the msg there is something perhaps , i could spend some days to look at this thing , cuz it seems that when we train networks on let 's say , on timit with msg features , they look as good as networks trained on plp . when they are used on the speechdat - car data , it 's not the case the msg features are much worse , and so maybe they 're , less more sensitive to different recording conditions , or shou professor e: should n't be . they should be less so . r right ? wh - ? but let me ask you this . what what 's the , ?
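the first-order picture in the formant example above is just a common scale factor on the frequency axis; a toy illustration , where alpha = 1.5 reproduces the numbers in the discussion:

    import numpy as np

    # a shorter vocal tract scales the formants up by a common factor, so
    # higher formants move further in absolute hertz than the first one does
    def warp_formants(formants_hz, alpha):
        return np.asarray(formants_hz, dtype=float) * alpha

    print(warp_formants([500, 1500, 2500], 1.5))   # -> [ 750. 2250. 3750.]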
do you kno recall if the insertions were higher with msg ? phd b: i . i can not tell . it 's it the error rate is higher . so , i don professor e: . but you should always look at insertions , deletions , and substitutions . so , msg is very , very dif , plp is very much like mel cepstrum . msg is very different from both of them . so , if it 's very different , then this is the thing i ' m really glad andreas brought this point up . i had forgotten to discuss it . you always have to look at how this , these adjustments , affect things . and even though we 're not allowed to do that , again we maybe could reflect that back to our use of the features . so if it if , the problem might be that the range of the msg features is quite different than the range of the plp or mel cepstrum . and you might wanna change that . phd b: but , it 's d it 's after , it 's tandem features , so we we have estimation of post posteriors with plp and with msg as input , so i don professor e: that means they 're between zero and one . but i it does n't necessarily , they could be , do - does n't tell you what the variance of the things is . cuz if you 're taking the log of these things , it could be , knowing what the sum of the probabilities are , does n't tell you what the sum of the logs are . phd b: so we should look at the likelihood , or what ? or at the log , perhaps , professor e: or what , what you 're the thing you 're actually looking at . so your the values that are actually being fed into htk . what do they look like ? phd f: no and so th the , for the tandem system , the values that come out of the net do n't go through the sigmoid . right ? they 're the pre - nonlinearity values ? professor e: , almost . but then you actually do a klt on them . they are n't normalized after that , are they ? professor e: so the question is . whatever they are at that point , are they something for which taking a square root or cube root or fourth root like that is gon na be a good or a bad thing ? , and that 's something that nothing else after that is gon na , things are gon na scale it , subtract things from it , scale it from it , but nothing will have that same effect . anyway , phd f: cuz if the log probs that are coming out of the msg are really big , the standard insertion penalty is gon na have very little effect professor e: no . again you do n't really look at that . it 's something that , and then it 's going through this transformation that 's probably pretty close to it 's , whatever the klt is doing . but it 's probably pretty close to what a discrete cosine transformation is doing . but still it 's not gon na probably radically change the scale of things . i would think . it may be entirely off and it may be at the very least it may be quite different for msg than it is for mel cepstrum or plp . so that would be so the first thing i 'd look at without adjusting anything would just be to go back to the experiment and look at the , substitutions , insertions , and deletions . and if the , i if there 's a fairly large effect of the difference , say , the r ratio between insertions and deletions for the two cases then that would be , an indicator that it might be in that direction . phd b: my point was more that it works sometimes but sometimes it does n't work . so . and it works on ti - digits and on speechdat - car it does n't work , professor e: but , some problems are harder than others , and , sometimes , there 's enough evidence for something to work and then it 's harder , it breaks . 
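for reference , a sketch of the klt step on tandem features: estimate a decorrelating rotation on training data and apply it per frame; whatever scale the features carry after this point is exactly what the range question above is about . names and sizes here are illustrative:

    import numpy as np

    def fit_klt(train_feats):
        # mean and eigenvectors of the covariance, estimated on training data
        mu = train_feats.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(train_feats - mu, rowvar=False))
        return mu, vecs[:, ::-1]          # highest-variance directions first

    def apply_klt(feats, mu, vecs):
        return (feats - mu) @ vecs

    train = np.random.randn(1000, 8) @ np.random.randn(8, 8)
    mu, vecs = fit_klt(train)
    # the rotated features have an approximately diagonal covariance
    print(np.round(np.cov(apply_klt(train, mu, vecs), rowvar=False), 2))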
so it 's but it but , i it could be that when you say it works maybe we could be doing much better , even in ti - digits . phd b: there is also the spectral subtraction , which , maybe we should , try to integrate it in our system . phd b: that would involve to mmm use a big a al already a big bunch of the system of ericsson . because he has spectral subtraction , then it 's followed by , other processing that 's are dependent on the , if it 's speech or noi or silence . and there is this spectral flattening after if it 's silence , and s it 's important , to reduce this musical noise and this increase of variance during silence portions . this was in this would involve to take almost everything from the this proposal and then just add some on - line normalization in the neural network . professor e: , this 'll be , something for discussion with hynek next week . how are , how are things going with what you 're doing ? grad d: , i took a lot of time just getting my taxes out of the way multi - national taxes . so , i ' m starting to write code now for my work but i do n't have any results yet . , i it would be good for me to talk to hynek , when he 's here . do what his schedule will be like ? professor e: , we 'll have a lot of time . i 'll , he 's he 'll be talking with everybody in this room professor e: not thursday and friday . cuz i will be at faculty retreat . i 'll try to connect with him and people as on wednesday . , how 'd taxes go ? taxes go ok ? professor e: . , good . that 's just that 's one of the big advantages of not making much money is the taxes are easier . grad d: ye , i 'll still have a bit of canadian income but it 'll be less complicated because i will not be a considered a resident of canada anymore , so i wo n't have to declare my american income on my canadian return . grad a: , right . , continuing looking at , ph , phonetic events , and , this tuesday gon na be , meeting with john ohala with chuck to talk some more about these , ph , phonetic events . , came up with , a plan of attack , gon na execute , and it 's that 's it . grad a: i was hoping i could wave my hands . so , once wa i was thinking getting us a set of acoustic events to , to be able to distinguish between , phones and words and . and , once we would figure out a set of these events that can be , , hand - labeled or derived , from h the hand - labeled phone targets . , we could take these events and , do some cheating experiments , where we feed , these events into an sri system , , and evaluate its performance on a switchboard task . , . grad a: , give you an example of twenty - odd events . so , he in this paper , it 's talking about phoneme recognition using acoustic events . so , things like frication or , nasality . phd f: . the just to expand a little bit on the idea of acoustic event . there 's , in my mind , anyways , there 's a difference between , acoustic features and acoustic events . and of acoustic features as being , things that linguists talk about , like , phd f: that 's not based on , acoustic data . so they talk about features for phones , like , its height , its tenseness , laxness , things like that , which may or may not be all that easy to measure in the acoustic signal . versus an acoustic event , which is just { nonvocalsound } some { nonvocalsound } something in the acoustic signal { nonvocalsound } that is fairly easy to measure . so it 's , it 's a little different , in at least in my mind . 
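going back to the spectral subtraction raised earlier in this exchange , a bare-bones magnitude-domain sketch; the flooring against a fraction of the noise estimate is the usual guard against the musical noise mentioned above , and the beta value is an arbitrary choice here:

    import numpy as np

    def spectral_subtract(mag, noise_mag, beta=0.01):
        # mag: (frames, bins) magnitude spectra; noise_mag: (bins,) estimate,
        # e.g. averaged over frames a vad has marked as silence
        cleaned = mag - noise_mag
        # floor at a fraction of the noise rather than at zero: hard zeroing
        # leaves isolated spectral peaks that sound like musical noise
        return np.maximum(cleaned, beta * noise_mag)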
professor e: , when we did the spam work , there we had this notion of an , auditory @ auditory event . professor e: called them " avents " , , with an a at the front . and the idea was something that occurred that is important to a bunch of neurons somewhere . a sudden change or a relatively rapid change in some spectral characteristic will do this . , there 's certainly a bunch of places where that neurons are gon na fire because something novel has happened . that was that was the main thing that we were focusing on there . but there 's certainly other things beyond what we talked about there that are n't just rapid changes , phd f: it 's kinda like the difference between top - down and bottom - up . of the acoustic , phonetic features as being top - down . , you look at the phone and you say this phone is supposed to be , have this feature , and this feature . whether tha those features show up in the acoustic signal is irrelevant . whereas , an acoustic event goes the other way . here 's the signal . here 's some event . what ? and then that , that may map to this phone sometimes , and sometimes it may not . it just depen maybe depends on the context , things like that . and so it 's a different way of looking . grad a: so . using these events , , we can perform these , cheating experiments . see how good they are , in terms of phoneme recognition or word recognition . and then from that point on , i would , s design robust event detectors , in a similar , wa spirit that saul has done w , with his graphical models , and this probabilistic and - or model that he uses . , try to extend it to , to account for other phenomena like , cmr co - modulation release . and maybe also investigate ways to modify the structure of these models , in a data - driven way , similar to the way that , jeff , bilmes did his work . and while i ' m doing these , event detectors , ma mea measure my progress by comparing , the error rates in clean and noisy conditions to something like , neural nets . so so , once we have these , event detectors , we could put them together and feed the outputs of the event detectors into the sri , hmm system , and test it on switchboard or , maybe even aurora . that 's the big picture of , the plan . professor e: there 's , a couple people who are gon na be here i forget if i already told you this , but , a couple people who are gon na be here for six months . , there 's a professor kollmeier , from germany who 's , quite big in the , hearing - aid signal - processing area and , michael kleinschmidt , who 's worked with him , who also looks at auditory properties inspired by various , brain function things . , they 'll be interesting to talk to , in this issue as these detectors are , developing . so , he looks at interesting things in the different ways of looking at spectra in order to get various speech properties out . short meeting , but that 's ok . we might as do our digits . and like i say , i encourage you to go ahead and meet , next week with , hynek . alright , i 'll start . it 's , one thirty - five . seventeen ok
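a toy rendering of the "avent" idea from the spam work described above , flagging frames where the short-term spectrum changes quickly; the threshold is invented:

    import numpy as np

    def rapid_change_frames(log_mel, threshold=2.0):
        # frame-to-frame euclidean distance in log mel space; big jumps are
        # the sudden spectral changes the discussion treats as events
        jumps = np.sqrt((np.diff(log_mel, axis=0) ** 2).sum(axis=1))
        return np.flatnonzero(jumps > threshold) + 1   # event frame indices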
although the members of icsi's meeting recorder group at berkeley had little progress to report , there were still a number of issues relating to their work to discuss. these included making plans for upcoming experiments , clarifying definitions , and approaches which may or may not be against the rules of the aurora project , alongside alternatives that would not be. there was also debate about the necessary continuation of a group report. plans were also made with regard to a visitor from research partner ogi for next week's meeting. speaker me018 will provide numbers on his experiments into adjusting insertion penalties. speaker me013 feels that mn007 and fn002 should just put together the coherent bare-bones of their report , and move back to experimenting , leaving the report for the end of the project. he also feels that they should discuss some aspects of future work , for clarity's sake , with the visitor from ogi. the main project the group is working on , aurora , has a number of rules attached as to what developers can and cannot play with , but this needs to be clarified. the rules are adhered to in the small community , but make no sense from a broader research perspective. while writing their report , mn007 and fn002 have noticed some tables contain only partial results , and there are things they do not recall the reasoning behind. speaker me018 has done a little initial research into the next area he is to look at , adjusting the scaling and the insertion penalties. playing with the latter made little difference , though this was with a different feature set. speakers mn007 and fn002 have been working on their report , logically writing up everything they have done so far. mn007 has also been working with a new dataset , preparing it for use as a more realistic source of noise , though it is not clear if this is allowed. speaker me006 has continued to look at phonetic events , and has come up with a plan for his future work.
###dialogue: professor e: i was saying hynek 'll be here next week , wednesday through friday , through saturday , and , i wo n't be here thursday and friday . but my suggestion is that , at least for this meeting , people should go ahead , cuz hynek will be here , and , we do n't have any czech accent yet , as far as i know , so there we go . phd f: i do n't really have , anything new . been working on meeting recorder . so . professor e: do you think that would be the case for next week also ? or is , ? what 's your projection on ? cuz the one thing that seems to me we really should try , if you had n't tried it before , because it had n't occurred to me it was an obvious thing is , adjusting the , sca the scaling and , insertion penalty sorta . phd f: i did play with that , actually , a little bit . what happens is , when you get to the noisy , you start getting lots of insertions . phd f: and , so i ' ve tried playing around a little bit with , the insertion penalties and things like that . , it did n't make a whole lot of difference . like for the - matched case , it seemed like it was pretty good . i could do more playing with that , though . and , phd f: and see . yes . , you 're talking about for th for our features . professor e: so , i it 's not the direction that you were working with that we were saying what 's the , what 's the best you can do with mel cepstrum . but , they raised a very valid point , professor e: which , i so , to first order , you have other things you were gon na do , but to first order , i would say that the conclusion is that if you , do , some monkeying around with , the exact htk training and @ with , , how many states and , that it does n't particularly improve the performance . in other words , that even though it sounds pretty dumb , just applying the same number of states to everything , more or less , no matter what language , is n't so bad . right ? and i you had n't gotten to all the experiments you wanted to do with number of gaussians , professor e: but , let 's just if we had to if we had to draw a conclusion on the information we have so far , we 'd say something like that . professor e: , so the next question to ask , which is the one that andreas was dre addressing himself to in the lunch meeting , is , we 're not supposed to adjust the back - end , but anybody using the system would . so , if you were just adjusting the back - end , how much better would you do , in noise ? , because the language scaling and insertion penalties and are probably set to be about right for mel cepstrum . but , they 're probably not set right for these things , particularly these things that look over , larger time windows , in one way or another with lda and klt and neural nets and all these things . in the fa past we ' ve always found that we had to increase the insertion penalty to correspond to such things . that 's , @ that 's a first - order thing that we should try . phd f: so for th so the experiment is to , run our front - end like normal , with the default , insertion penalties and , and then tweak that a little bit and see how much of a difference it makes professor e: so by " our front - end " take , the aurora - two s take some version that stephane has that is , our current best version of something . professor e: , y do n't wanna do this over a hundred different things that they ' ve tried but , for some version that you say is a good one . ? how how much , does it improve if you actually adjust that ? but it is interesting . 
you say you have for the noisy how about for the mismatched or the medium mismatched conditions ? have you ? when you adjusted those numbers for mel cepstrum , did it ? phd f: , i do n't remember off the top of my head . i did n't even write them down . i do n't remember . i would need to , i did write down , so , when i was doing wrote down some numbers for the - matched case . looking at the i wrote down what the deletions , substitutions , and insertions were , for different numbers of states per phone . , but , that 's all i wrote down . i would need to do that . professor e: and , also , sometimes if you run behind on some of these things , maybe we can get someone else to do it and you can supervise . but but it would be it 'd be good to know that . phd f: need to get , front - end , from you or you point me to some files that you ' ve already calculated . phd f: i probably will have time to do that and time to play a little bit with the silence model . professor e: cuz , the other that , might have been part of what , the difference was at least part of it that we were seeing . remember we were seeing the sri system was so much better than the tandem system . part of it might just be that the sri system , they always adjust these things to be optimized , phd f: i wonder if there 's anything that we could do to the front - end that would affect the insertion professor e: , part of what 's going on , is the , the range of values . so , if you have something that has a much smaller range or a much larger range , and taking the appropriate root . if something is like the equivalent of a bunch of probabilities multiplied together , you can take a root of some sort . if it 's like seven probabilities together , you can take the seventh root of it , or if it 's in the log domain , divide it by seven . but , that has a similar effect because it changes the scale of the numbers of the differences between different candidates from the acoustic model phd f: so that w so , in effect , that 's changing the value of your insertion penalty . professor e: , it 's more directly like the language scaling or the , the model scaling or acoustic scaling , professor e: but that those things have a similar effect to the insertion penalty anyway . they 're a slightly different way of handling it . phd f: so if we the insertion penalty is , then we can get an idea about what range our number should be in , professor e: so that 's why that 's another reason other than curiosity as to why i it would be kinda neat to find out if we 're way off . , the other thing is , are are n't we seeing ? y y i ' m you ' ve already looked at this bu in these noisy cases , are ? we are seeing lots of insertions . the insertion number is quite high ? i know the vad takes pre care of part of that , but phd f: i ' ve seen that with the mel cepstrum . i do n't i about the aurora front - end , but phd b: it 's much more balanced with , when the front - end is more robust . i could look at it at this . professor e: i ' m it 's more balanced , but it would n't surprise me if there 's still , in the old systems we used to do , i remember numbers like insertions being half the number of deletions , as being and both numbers being tend to be on the small side comparing to , substitutions . 
phd f: , this the whole problem with insertions was what , we talked about when the guy from ogi came down that one time and that was when people were saying , we should have a , voice activity detector that , because all that we 're getting thr the silence that 's getting through is causing insertions . professor e: and it may be less of a critical thing . , the fact that some get by may be less of a critical thing if you , get things in the right range . so , the insertions is a symptom . it 's a symptom that there 's something , wrong with the range . but there 's , your substitutions tend to go up as . so , i that , the most obvious thing is just the insertions , @ . if you 're operating in the wrong range , that 's why just in general , if you change what these penalties and scaling factors are , you reach some point that 's a minimum . so . we do have to do over a range of different conditions , some of which are noisier than others . but , we may get a better handle on that if we see , we ca it 's if we actually could pick a more stable value for the range of these features , it , , could even though it 's true that in a real situation you can adjust the these scaling factors in the back - end , and it 's ar artificial here that we 're not adjusting those , you certainly do n't wanna be adjusting those all the time . and if you have a front - end that 's in roughly the right range i remember after we got our more or less together in the previous systems we built , that we tended to set those scaling factors at a standard level , and we would rarely adjust them again , even though you could get a for an evaluation you can get an extra point if you tweaked it a little bit . but , once we knew what rou roughly the right operating range was , it was pretty stable , and , we might just not even be in the right operating range . phd f: so , would the ? , would a good idea be to try to map it into the same range that you get in the - matched case ? so , if we computed what the range was in - matched , and then when we get our noisy conditions out we try to make it have the same range as ? professor e: no . you do n't wanna change it for different conditions . no . i what what i ' m saying phd f: , i was n't suggesting change it for different conditions . i was just saying that when we pick a range , we wanna pick a range that we map our numbers into we should probably pick it based on the range that we get in the - matched case . otherwise , what range are we gon na choose to map everything into ? professor e: . it depends how much we wanna do gamesmanship and how much we wanna do , i if he it to me , actually , even if you wanna be play on the gamesmanship side , it can be kinda tricky . so , what you would do is set the scaling factors , so that you got the best number for this point four five times the , and so on . but they might change that those weightings . sorta think we need to explore the space . just take a look at it a little bit . and we may just find that we 're way off . maybe we 're not . as for these other things , it may turn out that , it 's reasonable . but then , andreas gave a very reasonable response , and he 's probably not gon na be the only one who 's gon na say this in the future of , people within this tight - knit community who are doing this evaluation are accepting , more or less , that these are the rules . but , people outside of it who look in at the broader picture are certainly gon na say " , a minute . 
you 're doing all this standing on your head , on the front - end , when all you could do is just adjust this in the back - end with one s one knob . " and so we have to at least , determine that 's not true , which would be ok , or determine that it is true , in which case we want to adjust that and then continue with what we 're doing . and as you say as you point out finding ways to then compensate for that in the front - end also then becomes a priority for this particular test , and saying you do n't have to do that . what 's new with you ? professor e: you what 's old with you that has developed over the last week or two ? phd f: how about that ? any - anything new on the thing that , you were working on with the , ? phd c: , to try to found , nnn , robust feature for detect between voice and unvoice . and we w we try to use the variance of the es difference between the fft spectrum and mel filter bank spectrum . , also the another parameter is relates with the auto - correlation function . r - ze energy and the variance a also of the auto - correlation function . professor e: so , that 's . that 's what you were describing , i , a week or two ago . phd c: but we do n't have res we do n't have result of the auro for aurora yet . we need to train the neural network and phd c: , we work in the report , too , because we have a lot of result , they are very dispersed , and was necessary to look in all the directory to give some more structure . professor e: so . b so i if summarize , what 's going on is that you 're going over a lot of material that you have generated in furious fashion , f generating many results and doing many experiments and trying to pull it together into some coherent form to be able to see wha see what happens . phd f: is this a report that 's for aurora ? or is it just like a tech report for icsi , professor e: so , my suggestion , though , is that you not necessarily finish that . but that you put it all together so that it 's you ' ve got a clearer structure to it . what things are , you have things documented , you ' ve looked things up that you needed to look up . so that , so that such a thing can be written . when when do you leave again ? professor e: first of july ? and that you figure on actually finishing it in june . because , you 're gon na have another bunch of results to fit in there anyway . professor e: so so , it 's good to pause , and to gather everything together and make it 's in good shape , so that other people can get access to it and so that it can go into a report in june . but to really work on fine - tuning the report n at this point is probably bad timing , . phd b: , we did n't we just planned to work on it one week on this report , not no more , anyway . professor e: but you ma you may really wanna add other things later anyway because you there 's more to go ? phd f: are you discovering anything , that makes you scratch your head as you write this report , like why did we do that , or why did n't we do this , phd b: actually , there were some tables that were also with partial results . we just noticed that , wh while gathering the result that for some conditions we did n't have everything . but anyway . , . we have , extracted actually the noises from the speechdat - car . and so , we can train neural network with speech and these noises . 
it 's difficult to say what it will give , because when we look at the aurora the ti - digits experiments , they have these three conditions that have different noises , and this system perform as on the seen noises on the unseen noises and on the seen noises . but , this is something we have to try anyway . so adding the noises from the speechdat - car . phd b: ogi does did that . at some point they did that for the voice activity detector . phd b: they used some parts of the , italian database to train the voice activity detector , . it professor e: . i that 's a matter of interpretation . the rules as i understand it , is that in principle the italian and the spanish and the english no , italian and the finnish and the english ? were development data professor e: on which you could adjust things . and the and the german and danish were the evaluation data . and then when they finally actually evaluated things they used everything . professor e: and it is true that the performance , on the german was , even though the improvement was n't so good , the pre the raw performance was really pretty good . and , it does n't appear that there 's strong evidence that even though things were somewhat tuned on those three or four languages , that going to a different language really hurt you . and the noises were not exactly the same . because it was taken from a different , they were different drives . professor e: , it was actual different cars and so on . , it 's somewhat tuned . it 's tuned more than , a you 'd really like to have something that needed no particular noise , maybe just some white noise like that a at most . but that 's not really what this contest is . , i it 's ok . that 's something i 'd like to understand before we actually use something from it , phd f: it 's probably something that , mmm , the , experiment designers did n't really think about , because most people are n't doing trained systems , or , , systems that are like ours , where you actually use the data to build models . , they just doing signal - processing . professor e: , it 's true , except that , that 's what we used in aurora one , and then they designed the things for aurora - two knowing that we were doing that . phd f: that 's true . and they did n't forbid us right ? to build models on the data ? professor e: but , i think that it probably would be the case that if , say , we trained on italian , data and then , we tested on danish data and it did terribly , that it would look bad . and someone would notice and would say " , look . this is not generalizing . " i would hope tha i would hope they would . but , it 's true . , maybe there 's parameters that other people have used , th that they have tuned in some way for other things . so it 's , we should we should maybe that 's maybe a topic especially if you talk with him when i ' m not here , that 's a topic you should discuss with hynek to , double check it 's ok . phd f: , i was thinking about things like , gender - specific nets and , vocal tract length normalization . things like that . i d i do n't i did n't information we have about the speakers that we could try to take advantage of . professor e: , again , i if you had the whole system you were optimizing , that would be easy to see . but if you 're supposedly just using a fixed back - end and you 're just coming up with a feature vector , w i ' m not , having the two nets suppose you detected that it was male , it was female you come up with different professor e: , it 's an interesting thought . 
maybe having something along the , you ca n't really do vocal tract normalization . but something that had some of that effect being applied to the data in some way . phd f: no . i had n't thought it was thought too much about it , really . it just something that popped into my head just now . and so i , you could maybe use the ideas a similar idea to what they do in vocal tract length normalization . , you have some a , general speech model , maybe just a mixture of gaussians that you evaluate every utterance against , and then you see where each , utterance like , the likelihood of each utterance . you divide the range of the likelihoods up into discrete bins and then each bin 's got some knob , setting . professor e: . but just listen to yourself . , that really does n't sound like a real - time thing with less than two hundred milliseconds , latency that and where you 're not adjusting the statistical engine . , that just professor e: not just expensive . i do n't see how you could possibly do it . you ca n't look at the whole utterance and do anything . , you can only each frame comes in and it 's got ta go out the other end . phd f: so whatever it was , it would have to be on a per frame basis . professor e: , you can do , fairly quickly you can do male female f male female . but as far as , like bbn did a thing with , vocal tract normalization a ways back . maybe other people did too . with with , l trying to identify third formant average third formant using that as an indicator of , third formant i if you imagine that to first order what happens with , changing vocal tract is that , the formants get moved out by some proportion so , if you had a first formant that was one hundred hertz before , if the fifty if the vocal tract is fifty percent shorter , then it would be out at seven fifty hertz , and so on . so , that 's a move of two hundred fifty hertz . whereas the third formant which might have started off at twenty - five hundred hertz , might be out to thirty - seven fifty , so it 's at although , you frequently get less distinct higher formants , it 's still third formant 's a reasonable compromise , so , , if i recall correctly , they did something like that . and and but , that does n't work for just having one frame . ? that 's more like looking at third formant over a turn like that , but on the other hand , male female is a much simpler categorization than figuring out a factor to , squish or expand the spectrum . so , . y you could imagine that , just like we 're saying voiced - unvoiced is good to know , male female is good to know also . but , you 'd have to figure out a way to , incorporate it on the fly . , i , as you say , one thing you could do is simply , have the male and female output vectors , tr nets trained only on males and n trained only on females or , i if that would really help , because you already have males and females and it 's - putting into one net . is it ? phd b: . so , this noise , . the msg there is something perhaps , i could spend some days to look at this thing , cuz it seems that when we train networks on let 's say , on timit with msg features , they look as good as networks trained on plp . when they are used on the speechdat - car data , it 's not the case the msg features are much worse , and so maybe they 're , less more sensitive to different recording conditions , or shou professor e: should n't be . they should be less so . r right ? wh - ? but let me ask you this . what what 's the , ? 
but let me ask you this : do you recall if the insertions were higher with msg ? phd b: i can not tell ; the error rate is higher . professor e: but you should always look at insertions , deletions , and substitutions . plp is very much like mel cepstrum , and msg is very different from both of them . so if it 's very different this is the thing i ' m really glad andreas brought this point up ; i had forgotten to discuss it . you always have to look at how these adjustments affect things . and even though we 're not allowed to adjust them , we maybe could reflect that back into our use of the features . so the problem might be that the range of the msg features is quite different than the range of the plp or mel cepstrum , and you might wanna change that . phd b: but it 's after it 's tandem features , so we have estimation of posteriors with plp and with msg as input . professor e: that means they 're between zero and one . but it does n't tell you what the variance of the things is . cuz if you 're taking the log of these things , knowing what the sum of the probabilities is does n't tell you what the sum of the logs is . phd b: so we should look at the likelihood , or what ? or at the log , perhaps ? professor e: at the thing you 're actually looking at the values that are actually being fed into htk . what do they look like ? phd f: so for the tandem system , the values that come out of the net do n't go through the sigmoid , right ? they 're the pre - nonlinearity values ? professor e: almost . but then you actually do a klt on them . they are n't normalized after that , are they ? professor e: so the question is , whatever they are at that point , are they something for which taking a square root or cube root or fourth root like that is gon na be a good or a bad thing ? and that 's something nothing else after that will fix : things are gon na scale it and subtract things from it , but nothing will have that same effect . anyway . phd f: cuz if the log probs that are coming out of the msg are really big , the standard insertion penalty is gon na have very little effect professor e: no , again you do n't really look at that . it 's going through this transformation , whatever the klt is doing , which is probably pretty close to what a discrete cosine transformation is doing . but still it 's probably not gon na radically change the scale of things , i would think . it may be entirely off , and at the very least it may be quite different for msg than it is for mel cepstrum or plp . so the first thing i 'd look at , without adjusting anything , would just be to go back to the experiment and look at the substitutions , insertions , and deletions . and if there 's a fairly large effect , say , in the ratio between insertions and deletions for the two cases , then that would be an indicator that it might be in that direction . phd b: my point was more that it works sometimes but sometimes it does n't work : it works on ti - digits and on speechdat - car it does n't . professor e: but some problems are harder than others , and sometimes there 's enough evidence for something to work , and then when it 's harder , it breaks .
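the insertion / deletion / substitution breakdown professor e keeps asking for can be read off a standard word - level edit - distance alignment . this is the usual dynamic - programming formulation ( the kind of breakdown htk 's scoring reports ) , not the htk tool itself :

```python
# A standard word-level alignment for counting substitutions, insertions
# and deletions -- the breakdown to check before comparing MSG and PLP
# error rates.
def align_counts(ref, hyp):
    R, H = len(ref), len(hyp)
    # cost[i][j] = (total, subs, ins, dels) for ref[:i] vs hyp[:j]
    cost = [[None] * (H + 1) for _ in range(R + 1)]
    cost[0][0] = (0, 0, 0, 0)
    for i in range(1, R + 1):
        t = cost[i - 1][0]; cost[i][0] = (t[0] + 1, t[1], t[2], t[3] + 1)
    for j in range(1, H + 1):
        t = cost[0][j - 1]; cost[0][j] = (t[0] + 1, t[1], t[2] + 1, t[3])
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            match = 0 if ref[i - 1] == hyp[j - 1] else 1
            d = cost[i-1][j-1]; sub  = (d[0] + match, d[1] + match, d[2], d[3])
            d = cost[i][j-1];   ins  = (d[0] + 1, d[1], d[2] + 1, d[3])
            d = cost[i-1][j];   dele = (d[0] + 1, d[1], d[2], d[3] + 1)
            cost[i][j] = min(sub, ins, dele)
    total, subs, ins, dels = cost[R][H]
    return subs, ins, dels

print(align_counts("one two three four".split(),
                   "one too three three four".split()))
# -> (1, 1, 0): one substitution ("two"->"too"), one insertion ("three")
```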
so it could be that when you say it works , maybe we could be doing much better even in ti - digits . phd b: there is also the spectral subtraction , which maybe we should try to integrate into our system . phd b: that would involve using already a big bunch of the system of ericsson , because he has spectral subtraction followed by other processing that 's dependent on whether it 's speech or silence . and there is this spectral flattening after , if it 's silence , and it 's important to reduce this musical noise and this increase of variance during silence portions . this would involve taking almost everything from this proposal and then just adding some on - line normalization and the neural network . professor e: this 'll be something for discussion with hynek next week . how are things going with what you 're doing ? grad d: i took a lot of time just getting my taxes out of the way multi - national taxes . so i ' m starting to write code now for my work , but i do n't have any results yet . it would be good for me to talk to hynek when he 's here . do you know what his schedule will be like ? professor e: we 'll have a lot of time . he 'll be talking with everybody in this room professor e: not thursday and friday , cuz i will be at the faculty retreat . i 'll try to connect with him and people on wednesday . how 'd taxes go ? taxes go ok ? professor e: good . that 's one of the big advantages of not making much money : the taxes are easier . grad d: i 'll still have a bit of canadian income , but it 'll be less complicated because i will not be considered a resident of canada anymore , so i wo n't have to declare my american income on my canadian return . grad a: right . continuing looking at phonetic events ; this tuesday gon na be meeting with john ohala and chuck to talk some more about these phonetic events . came up with a plan of attack , gon na execute , and that 's it . grad a: i was hoping i could wave my hands . so i was thinking of getting us a set of acoustic events to be able to distinguish between phones and words . and once we figure out a set of these events that can be hand - labeled or derived from the hand - labeled phone targets , we could take these events and do some cheating experiments , where we feed these events into an sri system and evaluate its performance on a switchboard task . grad a: to give you an example of twenty - odd events : in this paper , it 's talking about phoneme recognition using acoustic events , things like frication or nasality . phd f: just to expand a little bit on the idea of acoustic event : there 's , in my mind anyways , a difference between acoustic features and acoustic events . i think of acoustic features as being things that linguists talk about phd f: that 's not based on acoustic data . so they talk about features for phones , like its height , its tenseness , laxness , things like that , which may or may not be all that easy to measure in the acoustic signal . versus an acoustic event , which is just something in the acoustic signal that is fairly easy to measure . so it 's a little different , at least in my mind .
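a much - simplified sketch of the kind of spectral subtraction phd b outlines at the top of this exchange : subtract a noise estimate per frequency bin , floor the result , and flatten frames judged to be silence to suppress musical noise . the vad , floor and over - subtraction factor are all invented here ; this is not the actual ericsson front - end .

```python
# Simplified spectral subtraction with silence flattening: subtract a
# per-bin noise estimate, floor the result, and flatten non-speech
# frames toward the attenuated noise floor instead of leaving isolated
# residual peaks ("musical noise").
import numpy as np

def spectral_subtract(mag, noise_mag, vad, alpha=2.0, floor=0.05):
    """mag: (frames, bins) magnitude spectra; noise_mag: (bins,) estimate;
    vad: (frames,) booleans, True where speech was detected."""
    cleaned = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
    cleaned[~vad] = floor * noise_mag   # flatten silence frames
    return cleaned

rng = np.random.default_rng(2)
noise = np.abs(rng.normal(0.3, 0.05, size=129))
frames = np.abs(rng.normal(0.0, 0.2, size=(40, 129))) + noise
frames[10:30] += 1.0                      # pretend frames 10..30 are speech
vad = np.zeros(40, dtype=bool); vad[10:30] = True
out = spectral_subtract(frames, noise, vad)
# frame-to-frame variance in the silence region is exactly zero here
print(out[:10].var(axis=0).max(), out[10:30].var(axis=0).max())
```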
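and for phd f 's features - versus - events distinction , two examples of the " easy to measure " kind of per - frame acoustic event : a crude frication cue and a crude voicing cue . the thresholds and event names are made up for illustration .

```python
# Toy "easy to measure" acoustic events: per-frame signal measurements,
# not linguist-defined phone features.  Thresholds are hypothetical.
import numpy as np

def frame_events(frame, sr=8000):
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    hi = spec[freqs > 2000].sum() + 1e-9
    lo = spec[freqs <= 2000].sum() + 1e-9
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return {"frication": hi / lo > 1.5,  # energy concentrated up high
            "voicing": zcr < 0.1}        # few zero crossings -> periodic

sr = 8000
t = np.arange(200) / sr
voiced = 0.5 * np.sin(2 * np.pi * 140 * t)                 # vowel-like frame
fric = np.diff(np.random.default_rng(3).normal(size=201))  # high-passed noise
print(frame_events(voiced))  # voicing True, frication False
print(frame_events(fric))    # frication True, voicing False
```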
professor e: when we did the spam work , we had this notion of an auditory event . professor e: we called them " avents " , with an a at the front . and the idea was something that occurred that is important to a bunch of neurons somewhere . a sudden change or a relatively rapid change in some spectral characteristic will do this . there 's certainly a bunch of places where neurons are gon na fire because something novel has happened . that was the main thing that we were focusing on there . but there 's certainly other things beyond what we talked about there that are n't just rapid changes . phd f: it 's kinda like the difference between top - down and bottom - up . think of the acoustic - phonetic features as being top - down : you look at the phone and you say this phone is supposed to have this feature and this feature ; whether those features show up in the acoustic signal is irrelevant . whereas an acoustic event goes the other way . here 's the signal , here 's some event , what is it ? and then that may map to this phone sometimes , and sometimes it may not . it maybe depends on the context , things like that . and so it 's a different way of looking . grad a: so , using these events we can perform these cheating experiments , see how good they are in terms of phoneme recognition or word recognition . and then from that point on , i would design robust event detectors , in a similar spirit to what saul has done with his graphical models and this probabilistic and - or model that he uses . try to extend it to account for other phenomena like cmr co - modulation release . and maybe also investigate ways to modify the structure of these models in a data - driven way , similar to the way that jeff bilmes did his work . and while i ' m doing these event detectors , measure my progress by comparing the error rates in clean and noisy conditions to something like neural nets . so once we have these event detectors , we could put them together and feed the outputs of the event detectors into the sri hmm system , and test it on switchboard or maybe even aurora . that 's the big picture of the plan . professor e: there 's a couple people who are gon na be here i forget if i already told you this , but a couple people who are gon na be here for six months . there 's a professor kollmeier from germany who 's quite big in the hearing - aid signal - processing area , and michael kleinschmidt , who 's worked with him , who also looks at auditory properties inspired by various brain - function things . they 'll be interesting to talk to on this issue as these detectors are developing . he looks at interesting things in the different ways of looking at spectra in order to get various speech properties out . short meeting , but that 's ok . we might as well do our digits . and like i say , i encourage you to go ahead and meet next week with hynek . alright , i 'll start . it 's one thirty - five . seventeen ok ###summary: although the members of icsi 's meeting recorder group at berkeley had little progress to report , there were still a number of issues relating to their work to discuss. these included making plans for upcoming experiments , clarifying definitions , and identifying approaches which may or may not be against the rules of the aurora project , alongside alternatives that would not be. there was also debate about whether a group report needed to be continued.
plans were also made with regard to a visitor from research partner ogi: for next week's meeting , speaker me018 will provide numbers on his experiments into adjusting insertion penalties. speaker me013 feels that mn007 and fn002 should just put together the coherent bare bones of their report , and move back to experimenting , leaving the report for the end of the project. he also feels that they should discuss some aspects of future work , for clarity's sake , with the visitor from ogi. the main project the group is working on , aurora , has a number of rules attached as to what developers can and cannot play with , but these need to be clarified. the rules are adhered to within the small community , but make no sense from a broader research perspective. while writing their report , mn007 and fn002 have noticed some tables contain only partial results , and there are things they do not recall the reasoning behind. speaker me018 has done a little initial research into the next area he is to look at , adjusting the scaling and the insertion penalties. playing with the latter made little difference , though this was with a different feature set. speakers mn007 and fn002 have been working on their report , logically writing up everything they have done so far. mn007 has also been working with a new dataset , preparing it for use as a more realistic source of noise , though it is not clear if this is allowed. speaker me006 has continued to look at phonetic events , and has come up with a plan for his future work.
professor c: alright . good . ok so , let 's get started . nancy said she 's coming and that means she will be . my suggestion is that robert and johno give us a report on last week 's adventures to start . so everybody knows there were these guys from heidelberg actually from dfki , part of the german smartkom project who were here for the week and got a lot done . grad e: so we got to the point where we can now speak into the smartkom system , and it 'll go all the way through and then say something like " roman numeral one , am smarticus . " it actually says , " roemisch eins , am smarticus , " grad e: which means it 's just using a german synthesis module for english sentences . grad e: the synthesis is hopefully just a question of exchanging a couple of files , once we have them . and it 's not going to be a problem because we decided to stick to the so - called concept - to - speech approach . so i ' m going backwards now : " synthesis " is where you make these sounds , and " concept to speech " is feeding into this synthesis module , giving it what needs to be said and the whole syntactic structure so it can pronounce things better , presumably , than just with text to speech . grad e: and johno learned how to write xml tags , and did write the tree adjoining grammar for some sentences . no , right ? for a couple grad d: so , the way the dialogue manager works is it dumps out what it wants to know , or what it wants to tell the person , in xml , and there 's a conversion system to go from xml to something else . and so the knowledge base for the system that generates the syntactic structures for the generation is in a lisp - like form . and then the thing that actually builds these syntactic structures is something based on prolog . so you have a goal and it says " ok , i ' m gon na try to do the greet - the - person goal " , so it just starts , it binds some variables and it just decides to do some subgoals . it just means " build the tree . " and then it passes the tree onto the generation module . grad e: but out of the twelve possible utterances that the german system can do , we ' ve already written the syntax trees for three or four . grad d: so , the syntax trees are very simple . it 's like most of the sentences in one tree , and instead of breaking down to , like , small units and building back up , they took the sentences and cut them in half , or into thirds like that , and made trees out of those . and so tilman wrote a little tool where you could take lisp notation and generate an xml tree structure from the lisp . and so you just say " noun goes to " er , nah , i ' ve never been good at those . so there 's like the vp goes to n and those things in lisp , and it will generate for you . grad e: and because we 're sticking to that structure , the synthesis module does n't need to be changed . so all that fancy , the text - to - speech version of it , which is actually the simpler version , is gon na be done in october , which is much too late for us . this way we worked around that . the system i 'll show you the system . i actually want , at least , maybe , you should be able to start it on your own , if you wanna play around with it in the future .
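a guess at what the lisp - to - xml tree tool grad d mentions might look like : read an s - expression such as ( s ( np ... ) ( vp ... ) ) and emit nested xml nodes . the tag and attribute names below are invented , not smartkom 's actual schema .

```python
# Minimal s-expression reader that emits an XML tree, sketching what a
# lisp-notation -> XML conversion tool could do.
import xml.etree.ElementTree as ET

def parse_sexpr(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        assert tokens[pos] == "("
        label, pos = tokens[pos + 1], pos + 2
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                child, pos = read(pos)
                children.append(child)
            else:
                children.append(tokens[pos]); pos += 1
        return (label, children), pos + 1
    tree, _ = read(0)
    return tree

def to_xml(node):
    label, children = node
    el = ET.Element("node", cat=label)   # invented tag/attribute names
    for c in children:
        if isinstance(c, tuple):
            el.append(to_xml(c))
        else:
            el.text = c.strip('"')
    return el

tree = parse_sexpr('(s (np (pro "I")) (vp (v "am") (ving "walking")))')
print(ET.tostring(to_xml(tree)).decode())
```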
right now it 's brittle and you need to ch start it up and then make ts twenty changes on seventeen modules before they actually can stomach it , anything . and send in a couple of side queries on some dummy center set - up program so that it actually works because it 's designed for this seevit thing , where you have the gestural recognition running with this s siemens virtual touch screen , which we do n't have here . and so we 're doing it via mouse , but the whole system was designed to work with this thing and it was it was a lot of engineering . no science in there whatsoever , but it 's working now , and , that 's the good news . so everything else actually did prove to be language independent except for the parsing and the generation . grad d: why i had i did need to chan generate different trees than the german ones , mainly because like , the gerund in german is automatically taken care of with just a regular verb , grad d: so i 'd have to add " am walking , " or i 'd have to add a little stem for the " am " , when i build the built the tree . grad b: , i noticed that , that some of the examples they had , non - english word orders and so on , . and then all that good . so . like . grad b: we say , i still do n't exactly understand the information flow in this thing , or what the modules are and so on . so , like just that such - and - such module decides that it wants to achieve the goal of greeting the user , and then magically it s , how does it know which syntactic structure to pull out , and all that ? professor c: but when you get free and you have the time either robert or johno or walk you through it . professor c: it 's eee messy but once you understand it . it 's it 's there 's nothing really complicated about it . grad b: and i remember one thing that came up in the talk last wednesday . , was this , he talked about the idea of like , he was talking about these lexicalized , tree adjoining grammars where you for each word you , grad b: for each lexical item , the lexical entry says what all the trees are that it can appear in . and , that 's not v that 's the opposite of constructional . that 's , that 's hpsg or whatever . professor c: now , we 're not committed for our research to do any of those things . so we are committed for our funding . professor c: , to n no , to just get the dem get the demos they need . so between us all we have t to get th the demos they need . if it turns out we can also give them lots more than that by , tapping into other things we do , that 's great . professor c: and , s deliberately . so , the reason i 'd like you to understand what 's going on in this demo system is not because it 's important to the research . it 's just for closure . so that if we come up with a question of " could we fit this deeper in there ? " . what the hell we 're talking about fitting in . so it 's just , in the sam same actually with the rest of us we just need to really understand what 's there . is there anything we can make use of ? , is there anything we can give back , beyond th the minimum requirements ? but none of that has a short time fuse . so th the demo requirements for this fall are taken care of as of later this week . and then so , it 's probably fifteen months until there 's another serious demo requirement . that does n't mean we do n't think about it for fifteen months , but it means we can not think about it for six months . professor c: so . 
the plan for this summer , really is to step back from the applied project , professor c: keep the d keep the context open , but actually go after the basic issues . and , so the idea is there 's this , other subgroup that 's worrying about formalizing the nota getting a notation . but in parallel with that , the hope is tha in particularly you will work on constructions in english ge - and german for this domain , but y not worry about parsing them or fitting them into smartkom or any of the other anything lik any other constraints for the time being . professor c: it 's hard enough to get it semantically and syntactically right and then and get the constructions in their form and . and , i don i do n't want you f feeling that you have to somehow meet all these other constraints . professor c: . and similarly with the parsing , we 're gon na worry about parsing , the general case , construction parser for general constructions . and , if we need a cut - down version for something , or whatever , we 'll worry about that later . so i 'd like to , for the summer turn into science mode . and i assume that 's also , your plan as . grad b: so , that like the meetings so far that i ' ve been at have been geared towards this demo , and then that 's going to go away pretty soon . grad e: it 's got . what i what is a good idea that show to anyone who 's interested , we can even make a an internal demo , and i show you what i do , i speak into it and you hear it talk , and walk f through the information . so , this is like in half hour or forty - five minutes . just fun . and so you when somebody on the streets com comes up to you and asks you what is smartkom so you can , give a sensible answer . professor c: so , c sh we could set that up as actually an institute wide thing ? just give a talk in the big room , and so peo people 's going on ? when you 're ready ? , that 's the thing that 's the level at which we can just li invite everybody and say " this is a project that we ' ve been working on and here 's a demo version of it " and like that . grad e: ok . d we do wanna have all the bugs out b where you have to pipe in extra xml messages from left and right before you 're makes sense . professor c: but any so that e it 's clear , then , . actually , roughly starting let 's say , nex next meeting , cuz this meeting we have one other thing to tie up besides the trip report . but starting next meeting we want to flip into this mode where . there are a lot of issues , what 's the ontology look like , what do the constructions look like , what 's the execution engine look like , mmm lots of things . but , more focused on an idealized version than just getting the demo out . now before we do that , let 's get back in ! but , it 's still , useful for you to understand the demo version enough , so that you can see what it is that it might eventually get retro - fitted into . professor c: and johno 's already done that , looked at the dem the looked at the smartkom . professor c: , the parser , and that . ok . anyway . so , the trip the report on these the last we interrupted you guys telling us about what happened last week . grad e: and if you just take the and i g i got the feeling that we are the only ones right now who have a running system . i what the guys in kaiserslautern have running because e the version that is , the full version that 's on the server d does not work . and you need to do a lot of to make it work . 
and so it 's and even tilman and ralf said " there never was a really working version that did it without th all the shortcuts that they built in for the october @ version " . so we 're actually maybe ahead of the system gruppe by now , the system the integration group . and it was , it was fun to some extent , but the outcome that is scientific interest is that both ralf and tilman , i know that they enjoyed it here , and they r they liked , a lot of the they saw here , what we have been thinking about , and they 're more than willing to , cooperate , by all means . and , part of my responsibility is to use our internal " group - ware " server at eml , make that open to all of us and them , so that whatever we discuss in terms of parsing and generating and constructions w we put it in there and they put what they do in there and maybe we can even , get some overlap , get some synergy out of that . and , the , if i find someone at in eml that is interested in that , i may even think that we could look take constructions and generate from them because the tree adjoining grammars that tilman is using is as you said nothing but a mathematical formalism . and you can just do anything with it , whether it 's syntactic trees , h p s g - like , or whether it 's construction . so if you ever get to the generation side of constructing things and there might be something of interest there , but in the moment we 're definitely focused on the understanding , pipeline . professor c: anyth - any other repo visit reports stories ? we so we now know , what the landscape is like . and so we just push on and , do what we need to do . and one of the things we need to do is the , and this is relatively tight tightly constrained , is to finish up this belief - net . so . and i was going to switch to start talking about that unless there 're m other more general questions . ok so here 's where we are on the belief - net as far as i understand it . going back i two weeks ago robert had laid out this belief - net , missing only the connections . right ? that is so , he 'd put all th all the dots down , and we went through this , and , more or less convinced ourselves that at least the vast majority of the nodes that we needed for the demo level we were thinking of , were in there . we may run across one or two more . but the connections were n't . so , bhaskara and i went off and looked at some technical questions about were certain operations legitimate belief - net computations and was there some known problem with them or had someone already , solved how to do this and . and so bhaskara tracked that down . the answer seems to be , " no , no one has done it , but yes it 's a perfectly reasonable thing to do if that 's what you set out to do " . and , so the current state of things is that , again , starting now , we 'd like to actually get a running belief - net for this particular subdomain done in the next few weeks . so bhaskara is switching projects as of the first of june , and , he 's gon na leave us an inheritance , which is a hopefully a belief - net that does these things . and there 're two aspects to it , one of which is , technical , getting the coding right , and making it run , and like that . and the other is the actual semantics . ok ? what all , what are the considerations and how and what are the ways in which they relate . so he doe h he does n't need help from this group on the technical aspects or if he does we 'll do that separately . 
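to make those two aspects concrete , here is a toy of the design stage of a belief - net : the nodes exist , and the dependencies and conditional probability tables are written down by hand rather than learned . the variables and the numbers are invented stand - ins for the tourist domain , and inference is brute - force enumeration , not javabayes or hugin .

```python
# Toy hand-designed belief-net: hypothesized dependencies and CPTs,
# with inference by enumeration over the joint distribution.
from itertools import product

p_tourist = {True: 0.7, False: 0.3}                      # P(UserIsTourist)
p_asks_fee = {True: 0.6, False: 0.1}                     # P(AsksFee | Tourist)
p_intent_enter = {(True, True): 0.9, (True, False): 0.5, # P(Enter | T, Fee)
                  (False, True): 0.7, (False, False): 0.2}

def joint(t, f, e):
    pf = p_asks_fee[t] if f else 1 - p_asks_fee[t]
    pe = p_intent_enter[(t, f)] if e else 1 - p_intent_enter[(t, f)]
    return p_tourist[t] * pf * pe

# P(Enter | AsksFee=True) by summing the joint over the hidden node.
num = sum(joint(t, True, True) for t in (True, False))
den = sum(joint(t, True, e) for t, e in product((True, False), repeat=2))
print("P(enter | asks about fee) =", num / den)   # ~0.887 with these CPTs
```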
but in terms of what are the decisions and like that , that 's something that we all have to work out . is that right ? that 's both you guys ' understanding of where we are ? ok . grad g: so , is there like a latest version of the belief - net of the proposed belief - net ? grad e: no , we did n't decide . we wanted to look into maybe getting the visualization a bit clearer , but if we do it , a paper version of all the nodes and then the connections between them , that should suffice . professor c: we do in the long run wanna do better visualization and all that . that 's separable . grad d: i did look into that , in terms of exploding the nodes out and down , and javabayes does not support that . i can imagine a way of hacking at the code to do that . it 'd probably take two weeks or so to actually go through and do it , grad d: and i went through all the other packages on kevin murphy 's page , and i could n't find the necessary mix of free , with a gui , and with this thing that we want . professor c: we can pay if we can pay . if it 's paying a thousand dollars we can do that . so do n't view free as an absolute constraint . grad d: ok , so then i 'll go back and look at the ones on the list that grad g: how expensive i do n't think it 's is it free ? because i ' ve seen it advertised in places , so it seems to professor c: it may be free to academics . i have a copy that i downloaded . professor c: so at one point it was free . but i noticed people do use hugin , so professor c: and bhaskara can give you a pointer . so then , in any case , paying a little if it 's for a university , it 's probably gon na be real cheap anyway . but if it 's fifty thousand dollars we are n't gon na do it . i mean , we have no need for that . grad e: i would also suggest not to spend two weeks changing the javabayes code . grad e: i will send you a pointer to a java applet that does that ; it 's a fish - eye . you have a node , and you click on it , and it shows you all the connections , and then if you click on something else that moves away , that goes into the middle . and maybe there is an easy way of interfacing those two . if that does n't work , it 's not a problem we need to solve right now . what my job is : i will give you the input in terms of the internal structure . maybe node by node , like this ? or should i collect it all and grad g: just any rough representation of the entire belief - net is probably best . grad e: and you 're gon na be around ? again , always tuesdays and thursdays afternoon - ish , as usual ? or will that change ? grad g: this week i have a lot of projects , but after that i will generally be more free . so yes , i might be around . and generally if you email me i can also be around on other days . professor c: and this is not a crisis . everybody who 's a student should do their work , get their courses all in good shape , and then we 'll dig down on this . grad e: that 's good . that means i have to spend this week doing it . grad b: how do you go about this process of deciding what these connections are ? i know that there 's an issue of how to weight the different things too , right ? do you just guess and see if it professor c: there 're two different things you do . one is you design and the other is you learn . so what we 're gon na do initially is do design , and , if you will ,
that is use your best knowledge of the domain to , hypothesize what the dependencies are and . if it 's done right , and if you have data then , there are techniques for learning the numbers given the structure and there are even techniques for learning the structure , although that takes a lot more data , and it 's not as @ and so on . so but for the limited amount of we have for this particular exercise we 'll just design it . grad e: fo - hopefully as time passes we 'll get more and more data from heidelberg and from people actually using it and . so but this is the long run . but to solve our problems ag a mediocre design will do in the beginning . grad b: that 's right . , and , speaking of data , are there i could swore , i could swear i saw it sitting on someone 's desk at some point , but is there a transcript of any of the , initial interactions of people with the system ? cuz , i ' m still itching to look at what look at the , and see what people are saying . professor c: - . make yourself a note . so and , keith would like the german as the english , so whatever you guys can get . grad b: u ok . , i found the , the audio of some of those , and , it sounded like i did n't want to trudge through that , . it was just strange , but . grad e: we probably will not get those to describe because they were trial runs . , but that 's th but we have data in english and german already . professor c: ok , so while we 're still at this top level , anything else that we oughta talk about today ? grad b: , wanted to , s like mention as an issue , last meeting i was n't here because i went to a linguistics colloquium on the fictive motion , grad b: and that was pretty interesting and , seems to me that will fairly be of relevance to what we 're doing here because people are likely to give descriptions like , " what 's that thing right where you start to go up the hill , " like that , meaning a few feet up the hill or whatever from some reference point and all that so , i ' m in terms of , people trying to state locations or , all that , this is gon na be very relevant . so , now that was the talk was about english versus japanese , which the japanese does n't affect us directly , except that , some of the construction he 'd what he talked about was that in english we say things like th , " your bike is parked across the street " and we use these prepositional phrases , " , if you were to move across the street you would be at the bike " , but in japanese the more conventionalized tendency is to use a description of " where one has crossed to the river , there is a tree " . , and , you can actually say things like , " there 's a tree where one has crossed the river , but no one has ever crossed the river " , like that . so the idea is that this really is that 's supposed show that 's it 's really fictive and so on . but that construction is also used in english , like " right where you start to go up the hill " , or " just when you get off the train " , like that to , to indicate where something is . grad e: , the deep map project which is undergoing some renovation at the moment , but this is a three language project : german , english , japanese . and , we have a , i have taken care that we have the japanese generation and . and so i looked into spatial description . so we can generate spatial descriptions , how to get from a to b . and and information on objects , in german , english , and japanese . and there is a huge project on spatial descriptions differences in spatial descriptions . 
, if yo if you 're interested in that , so how , it does go d all the way down to the conceptual level to some extent . grad e: but , we should leave japanese constructions maybe outside of the scope for now , but definitely it 's interesting to look at cross the bordered there . phd a: are are you going to p pay any attention to the relative position of the direction relative to the speaker ? , there are some differences between hebrew and english . we can say " park in front of the car " as you come beh you drive behind the car . in hebrew it means " park behind the car " , because to follow the car is defined as it faces you . phd a: while in english , front of the car is the absolute front of the car . so . phd a: so , i is german closer to e , to e i do n't think it 's related to syntax , though , so it may be entirely different . grad e: . did you ever get to look at the rou paper that i sent you on the on that problem in english and german ? carroll , ninety - three . i there is a study on the differences between english and german on exactly that problem . so it 's they actually say " the monkey in front of the car , where 's the monkey ? " and , they found statistically very significant differences in english and german , so i it might be , since there are only a finite number of ways of doing it , that german might be more like hebrew in that respect . the solution they proposed was that it was due to syntactic factors . grad e: that syntactic facto factors do play a role there , wh whether you 're more likely , to develop , choices that lead you towards using intrinsic versus extrinsic reference frames . grad b: , it seems to me that you can get both in english depending o , like , " in front of the car " could like , here 's the car sideways to me in between me and the car 's in front of the car , or whatever . i could see that , but but anyway , so , this was a very good talk on those kinds of issues and so on . grad e: also give you , a pointer to a paper of mine which is the ultimate taxonomy of reference frames . grad e: i ' m the only person in the world who actually knows how it works . not really . grad e: it 's called a it 's it 's spatial reference frames . you actually have only . if you wanna have a this is usually i should there should be an " l " , though . actually you have only have two choices . you can either do a two - point or a three - point which is you you 're familiar with th with the " origo " ? where that 's the center " origo " is the center of the f frame of reference . and then you have the reference object and the object to be localized . ok ? in some cases the origo is the same as the reference object . grad e: origo is a terminus technikus . in that sense , that 's even used in the english literature . origo . grad e: and , so , this video tape is in front of me . i ' m the origo and i ' m also the reference object . those are two - point . and three - point relations is if something has an intrinsic front side like this chair then your f shoe is behind the chair . and , reference object and . no , from my point of view your shoe is left of the chair . grad b: you you can actually say things like , " it 's behind the tree from me " like that , in certain circumstances in english , right ? as " from where i ' m standing it would appear that " grad e: and then and then here you on this scale , you have it either be ego or allocentric . and that 's it . so . egocentric two - point , egocentric three - point , or you can have allocentric . 
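one geometric reading of the two - point / three - point taxonomy just sketched : a three - point relation takes an origo ( the viewpoint ) , a reference object and a located object , and swapping the origo switches between egocentric and allocentric readings . the angular sectors below are one arbitrary convention , and the coordinates echo the speaker / chair / shoe example above .

```python
# Projective spatial relation from an origo: which of front/behind/
# left/right holds for a target relative to a reference object, as seen
# from a given viewpoint.
import math

def projective_relation(origo, ref, target):
    view = math.atan2(ref[1] - origo[1], ref[0] - origo[0])
    ang = math.atan2(target[1] - ref[1], target[0] - ref[0]) - view
    ang = (ang + math.pi) % (2 * math.pi) - math.pi     # wrap to [-pi, pi)
    if abs(ang) < math.pi / 4:
        return "behind"          # further along the view axis, hidden
    if abs(ang) > 3 * math.pi / 4:
        return "in front of"     # between the viewer and the reference
    return "left of" if ang > 0 else "right of"

speaker, other = (0.0, 0.0), (6.0, 0.0)
chair, shoe = (3.0, 0.0), (3.0, 1.0)
print(projective_relation(speaker, chair, shoe))  # 'left of' for the speaker
print(projective_relation(other, chair, shoe))    # 'right of' from across the room
```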
so , " as seen from the church , the town hall is right of that , fire station " . aa - it 's hardly ever used but it 's w professor c: . , why do n't you just put it on the web page ? there 's this edu grad e: it 's also all on my home page at eml . it 's called " an anatomy of a spatial description " . professor c: maybe just put a link on . , there something that i did n't know until about a week ago or so , is , there are separate brain areas for things within reach , and things that are out of reach . so there 's all this linguistic about , near and far , or yon and . so this is all this is there 's this linguistic facts . but , the . here 's the way the findings go . that , they do mri , and if you 're got something within reach then there 's one of your areas lights up , and if something 's out of reach a different one . but here 's the amazing result , they say . you get someone with a deficit so that they have a perfectly normal ability at distance things . so the s typical task is subdivision . so there 's a line on the wall over there , and you give them a laser pointer , and you say , " where 's the midpoint ? " and they do fine . if you give them the line , and they have to touch it , they ca n't . there 's just that part of the brain is n't functioning , so they ca n't do that . here 's the real experiment . the same thing on the wall , you give them a laser , " where is it ? " , they do it . give them a stick , long stick , and say " do it " , they ca n't do it . so there 's a remapping of distant space into nearby space . professor c: and so this doe this is , first of all , it explains something that i ' ve always wondered about and i 'll do this test on you guys as . how - i have had an experience , not often , but a certain number of times , when , i ' m working with a tool , a screwdriver , for a long time , i start feeling the tip directly . not indirectly , but you actually can feel the tip . and people who are accomplished violinists and like that , claim they also have this thing where you get a direct sensation of , physical sensation , of the end affector . phd a: the so . i it 's not exactly the th same thing , but s it 's getting close to that . professor c: i it feels like your as if your neurons had extended themselves out to this tool , and you 're feeling forces on it and you deal directly with it . phd a: i once i was playing with those devices that allow you to manipulate objects when it 's dangerous to get close ? so you can insert your hand something phd a: and there 's a correspondence between so i played with it . after a while , you do n't feel the difference anymore . it 's phd a: very you stop back and suddenly it goes away and you have to work again to recapture it , but . professor c: right , so anyway , so so this was the first actual experimental evidence i 'd seen that was consistent with this anecdotal . and it makes a lovely def story about why languages , make this distinction . there are behavioral differences too . things you can reach are really quite different than things you ca n't . but there seems to be an actu really deep embodied neural difference . and i this is , so . in addition to the e professor c: exactly . so in addition to e ego and allocentric which appear all over the place , you also have this proximal - distal thing which is very deeply embedded . s grad e: , dan montello , he does the th the cognitive map world , down in santa barbara . 
and he always talks about these he he already i probably most likely without knowing this evidence is talking about these small scale spaces that you can manipulate versus large scale environmental spaces . professor c: there 's been a lot of behavioral things o on this , but that was the first neur neuro - physiological thing i saw . anyway , so we 'll look at this . and . so , all of these issues now are now starting to come up . so , now we 're now done with demos . we 're starting to do science , and so these issues about , reference , and spatial reference , discourse reference , - all this , deixis which is part of what you were talking about , so , all of this is coming up essentially starting now . so we got ta do all this . so there 's that . and then there 's also a set of system things that come up . so " ok , we 're not using their system . that means we need our system . " it it follows . and so , in addition to the business about just getting the linguistics right , and the formalism and , we 're actually gon na build something and , johno is point person on the parser , analyzer , whatever that is , and we 're gon na start on that in parallel with the , the grammar . but to do that we 're gon na need to make some decisions like ontology , so , and so this is another thing where we 're gon na , have to get involved and make s relatively early , make some decisions on , " is there an ontology api that " there 's a standard way of getting things from ontologies and we build the parser and around that , or is there a particular ontology that we 're gon na standardize on , and if so , is there something that we can use there . i does either the smartkom project or one of the projects at eml have something that we can just p pull out , for that . , so there are gon na be some things like that , which are not science but system . but we are n't gon na ignore those cuz we 're not only going the plan is not only to lay out this thing , but to actually build some of it . and how much we build , and . professor c: . part of it , if it works right , is wh it looks like we 're now in a position that the construction analyzer that we want for this applied project can be the same as the construction analyzer that nancy needs for the child language modeling . so . it 's always been out of phase but it now seems that , there 's a good shot at that . so we ' ve talked about it , and the hope is that we can make these things the same thing , and it 's only w in both cases it 's only one piece of a bigger system . but it would be if that piece were exactly the same piece . it was just this construction analyzer . and so we think we have a shot at that . so . the for to to come full circle on that , this formalization task , is trying to get the formalism into a shape where it can actually professor c: d , where it actually is covers the whole range of things . and the thing that got mark into the worst trouble is he had a very ambitious thing he was trying to do , and he insisted on trying to do it with a limited set of mechanisms . it turned out , inherently not to cover the space . and it just it was just terribly frustrating for him , and he seemed fully committed to both sides of this i irreconcilable thing . professor c: so there 's , deep , really deep , emotional commitment to a certain theory being , complete . professor c: we - it has n't it certainly has n't been observed , in any case . grad f: i have a problem , then . it 's so . 
whether i do depends on whether i ' m talking to him or him probably . professor c: why a actually , , you do but , th the thing you have to i m implement is so small that . grad f: which meeting i ' m in . it 's ok to be purist within that context . professor c: but to try to do something upscale and purist particularly if what you 're purist about does n't actually work , is real hard . professor c: and then the other thing is while we 're doing this robert 's gon na pick a piece of this space , professor c: , for his absentee thesis . you all know that you can just , in germany almost just send in your thesis . grad e: the - th there there 's a drive - in thesis sh joint over in saarbruecken . professor c: it costs a lot . the the amount you put in your credit card and as . but , but anyway , so , that 's , also got ta be worked out , hopefully over the next few weeks , so that it becomes clear , what piece , robert wants to jump into . and , while we 're at this level , there 's at least one new doctoral student in computer science who will be joining the project , either next week or the first of august , depending on the blandishments of microsoft . so , de . and her name is eva . it really is . nobody believed th that grad f: , it had to be a joke , of your part , like " johno made it up , i ' m . " professor c: so , she 's now out here she 's moved , and she 'll be a student as of then . and probably she 'll pick up from you on the belief - net , so sh she 'll be chasing you down and like that . professor c: , against all traditions . and actually i talked today to a undergraduate who wants to do an honors thesis on this . professor c: so anyway , but she 's another one of these ones with a three point nine average and so on . , so , i ' ve give i ' ve given her some things to read . so we 'll see how this goes . there 's yet another one of the incoming first - year graduate students who 's expressed interest , so we 'll see how that goes . anyway , so , as far as this group goes , it 's certainly worth continuing for the next few weeks to get closure on the belief - net and the ideas that are involved in that , and what are th what are the concepts . we 'll see whether it 's gon na make sense to have this be separate from the other bigger effort with the formalization or not , i ' m not . it partly depends on w what your thesis turns out to be and how that goes . s so , we 'll see . and then , ami , you can decide , how much time you wanna put into it and , it 's beginning to take shap shape , so and , you will find that if you want to look technically at some of the your traditional questions in this light , keith , who 's buil building constructions , will be quite happy to see what , you envision as the issues and the problems and , how they might get reflected in constructions . i suspect that 's right . phd a: i may have to go to switzerland for in june or beginning of july for between two weeks and four weeks , but , after that or before that . professor c: fine . and , if it 's useful we can probably arrange for you to drop by and visit either at heidelberg or at the german ai center , while you 're in the neighborhood . phd a: be actu actually i ' m invited to do some consulting with a bank in geneva which has an affiliation with a research institute in geneva , which i forgot the name of . 
professor c: e o do y , we 're connected to there 's a there 's a very significant connection between we 'll we 'll go through this , icsi and epfl , which is the , it 's the fr ge - germany 's got two big technical institutes . there 's one in zurich , e t and then there 's one , the french speaking one , in lausanne , professor c: f l . so find out who they are associated with in geneva . probably we 're connected to them . professor c: and so anyway we c we can m undoubtedly get ami to give a talk at eml like that . while he 's in grad e: . the one you gave here a couple of weeks ago would be of interest there , too . professor c: a lot of interest . actually , either place , dfki or , so , and if there is a book , that you 'll be building up some audience for it . and you 'll get feedback from these guys . professor c: cuz they ' ve actually these dfki guys have done as much as anyone over the last decade in trying to build them . so we 'll set that up . so , unless we wanna start digging into the belief - net and the decisions now , which would be fine , it 's probably grad e: i tho it 's probably better if i come next week with the version o point nine of the structure . professor c: so , how about if you two guys between now and next week come up with something that is partially proposal , and partially questions , saying " here 's what we think we understand , here are the things we think we do n't understand " . and that we as a group will try to finish it . what i 'd like to do is shoot f for finishing all this next monday . , " these are the decisions " i do n't think we 're gon na get lots more information . it 's a design problem . we and let 's come up with a first cut at what this should look like . and then finish it up . does that so make sense ? grad e: and , the sem semester will be over next week but then you have projects for one more week to come ? grad g: no , i 'll be done everything by this by the end of this week . grad d: nnn . this , i ' ve i have projects , but then the my prof professor of one of my classes also wa has a final that he 's giving us . and he 's giving us five days to do it which means it going to be hard . grad d: so , the seventeenth will definitely be the last day , like it or not for me . professor c: so let 's do this , and then we there 's gon na be some separate co these guys are talking , we have a group on the formalization , nancy and johno and i are gon na talk about parsers . so there 're various kinds of , nothing gets done even in a meeting of seven people , so , two or three people is the size in which actual work gets done . professor c: so we 'll do that . , the other thing we wanna do is catch up with , ellen and see what she 's doing because the image schemas are going to be , an important pa professor c: we we want those , and we want them formalized and like that . so let me make a note to do that . grad b: , i ' m actually probably going to be in contact with her pretty soon anyway because of various of us students were going to have a reading group about precisely that thing over the summer , professor c: that 's great ! , i shweta mentioned that , although she said it 's a secret .
the translation of smartkom to english is in its final stages. the synthesis module will be the last one to be done , after the english syntax trees are completed. the system is still buggy and unstable , but it will soon be ready for a demonstration. this is the first of two working demos required for the project. beyond that , there are no restrictions on the focus of the research or its possible applications. for example , issues like spatial descriptions could be investigated. the variety of linguistic conventions seems to develop around an ego/allo-centric and a proximal/distal paradigm. the latter is also reflected in neuro-physiological data. from an engineering perspective , the belief-net for the ave task should be completed within a few weeks. the majority of the nodes are already there. this leaves the dependencies between them and the rules of computation to be set. since the whole system is going to be re-designed , there are major decisions to be taken regarding the parser and the ontology , as well as what can be re-used from past eml projects. in parallel , another team is working on formalisation and notation. finally , more ideas are expected to come from students and their research. the final english smartkom demo will be presented to the whole institute once the system is de-bugged and stabilised. after the demo , the focus of research can switch towards purely scientific goals , including issues on ontology , deep semantic constructions , execution engines etc. moreover , a new system will be designed for the project and at least some parts of it should be built. similarly , the construction analyser should be a single , general tool working for both the tourist domain and child language modelling. the focus for the next meeting will be on the belief-net , of which a working demo should be complete in the next few weeks. since there are not enough data , its connections and weights will have to be designed. although javabayes has been the tool of choice until now , the possibility that hugin could be a better option should be investigated. in order to promote the collaboration with eml , the group-ware server there will be updated with all progress being made in the two sites. a talk on some of the issues will also be organised to take place at dfki. the german smartkom version available on the server does not work. the english version , although still under development , does work; however , the system is still unstable as , apart from other reasons , it was initially built to work with a touch screen. de-bugging and cleaning up has to take place before any new modules are added on it. as regards the belief-net , no connections and dependencies have been built into it. these will have to be guessed instead of learnt through data , as not enough data is available for such a task. finally , it has been noted that the javabayes gui does not satisfy all the presentation requirements for this belief-net and modifying the underlying code would be too time-consuming. the german smartkom system has been translated to english up to the speech synthesis level. the german syntax trees are currently being adapted to english. these also contribute information to the synthesis module in order to achieve better pronunciation. the current english version is probably the best working one , since some of the problems with the original system have been corrected.
the design of the belief-net has also progressed significantly: the vast majority of the nodes have been identified and the feasibility of the task from a technical point of view has been confirmed.
and so it 's and even tilman and ralf said " there never was a really working version that did it without th all the shortcuts that they built in for the october @ version " . so we 're actually maybe ahead of the system gruppe by now , the system the integration group . and it was , it was fun to some extent , but the outcome that is scientific interest is that both ralf and tilman , i know that they enjoyed it here , and they r they liked , a lot of the they saw here , what we have been thinking about , and they 're more than willing to , cooperate , by all means . and , part of my responsibility is to use our internal " group - ware " server at eml , make that open to all of us and them , so that whatever we discuss in terms of parsing and generating and constructions w we put it in there and they put what they do in there and maybe we can even , get some overlap , get some synergy out of that . and , the , if i find someone at in eml that is interested in that , i may even think that we could look take constructions and generate from them because the tree adjoining grammars that tilman is using is as you said nothing but a mathematical formalism . and you can just do anything with it , whether it 's syntactic trees , h p s g - like , or whether it 's construction . so if you ever get to the generation side of constructing things and there might be something of interest there , but in the moment we 're definitely focused on the understanding , pipeline . professor c: anyth - any other repo visit reports stories ? we so we now know , what the landscape is like . and so we just push on and , do what we need to do . and one of the things we need to do is the , and this is relatively tight tightly constrained , is to finish up this belief - net . so . and i was going to switch to start talking about that unless there 're m other more general questions . ok so here 's where we are on the belief - net as far as i understand it . going back i two weeks ago robert had laid out this belief - net , missing only the connections . right ? that is so , he 'd put all th all the dots down , and we went through this , and , more or less convinced ourselves that at least the vast majority of the nodes that we needed for the demo level we were thinking of , were in there . we may run across one or two more . but the connections were n't . so , bhaskara and i went off and looked at some technical questions about were certain operations legitimate belief - net computations and was there some known problem with them or had someone already , solved how to do this and . and so bhaskara tracked that down . the answer seems to be , " no , no one has done it , but yes it 's a perfectly reasonable thing to do if that 's what you set out to do " . and , so the current state of things is that , again , starting now , we 'd like to actually get a running belief - net for this particular subdomain done in the next few weeks . so bhaskara is switching projects as of the first of june , and , he 's gon na leave us an inheritance , which is a hopefully a belief - net that does these things . and there 're two aspects to it , one of which is , technical , getting the coding right , and making it run , and like that . and the other is the actual semantics . ok ? what all , what are the considerations and how and what are the ways in which they relate . so he doe h he does n't need help from this group on the technical aspects or if he does we 'll do that separately . 
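To make the belief-net status concrete, here is a toy fragment in the design-first spirit discussed next: the nodes are in place, and the connections and conditional probability tables are supplied by hand. Every node name (Verb, TouristType, Intent) and every number below is invented for illustration; this is not the project's net, and a tool like JavaBayes or Hugin would stand in for the plain dictionaries.

```python
from collections import Counter, defaultdict

# hand-designed CPTs (the "design" step); all names and numbers made up
p_type = {"local": 0.4, "visitor": 0.6}            # P(TouristType)
p_intent = {                                        # P(Intent | Verb, Type)
    ("enter", "local"):   {"enter": 0.9, "view": 0.1},
    ("enter", "visitor"): {"enter": 0.7, "view": 0.3},
    ("view", "local"):    {"enter": 0.2, "view": 0.8},
    ("view", "visitor"):  {"enter": 0.1, "view": 0.9},
}

def posterior_intent(verb):
    """P(Intent | Verb = verb) by summing out TouristType (enumeration)."""
    scores = {"enter": 0.0, "view": 0.0}
    for ttype, pt in p_type.items():
        for intent, pi in p_intent[(verb, ttype)].items():
            scores[intent] += pt * pi
    return scores

print(posterior_intent("enter"))   # ~{'enter': 0.78, 'view': 0.22}

# the "learn" step: re-estimate the same table from (verb, type, intent)
# records with add-one smoothing -- invented data, standard technique
data = [("enter", "local", "enter"), ("enter", "local", "enter"),
        ("view", "local", "view"), ("view", "local", "enter")]
counts = defaultdict(Counter)
for verb, ttype, intent in data:
    counts[(verb, ttype)][intent] += 1
values = {"enter", "view"}
learned = {parents: {v: (c[v] + 1) / (sum(c.values()) + len(values))
                     for v in values}
           for parents, c in counts.items()}
print(learned[("view", "local")])  # both values 0.5 here
```

The second half is what "learning the numbers given the structure" amounts to once enough logged interactions exist; until then the hand-designed tables do the work.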
but in terms of what are the decisions and like that , that 's something that we all have to work out . is is that right ? that 's both you guys ' understanding of where we are ? ok . grad g: so , i , is there like a latest version of the belief - net of the proposed belief - net ? like grad e: . , no , we did n't decide . we wanted to look into maybe getting it , the visualization , a bit clearer , but if we do it , a paper version of all the nodes and then the connections between them , that should suffice . professor c: we do in the long run wanna do better visualization and all that . that 's separable , grad d: i did look into that , in terms of , exploding the nodes out and down ag javabayes does not support that . imagine a way of hacking at the code to do that . it 'd probably take two weeks or so to actually go through and do it , grad d: and i went through all the other packages on murph - kevin murphy 's page , and i could n't find the necessary mix of free and with the gui and , with this thing that we want . professor c: , we can p if it 's if we can pay . if it 's paying a thousand dollars we can do that . so so do n't view free as a absolute constraint . grad d: ok . ok , so then i 'll go back and look at the ones on the list that grad g: how exp i do n't think it 's is it free ? because i ' ve seen it advertised in places so i it seems to professor c: , it may be free to academics . like i . i have a co i have a copy that i l i downloaded . professor c: so , at one point it was free . but yo i noticed people do use hugin so , professor c: and bhaskara can give you a pointer . so then , in any case , but paying a lit , if i if it 's probably for university , it 's gon na be real cheap anyway . but , if it 's fifty thousand dollars we are n't gon na do it . i ' m mean , we have no need for that . grad e: i also s would suggest not to d spend two weeks in changing the javabayes code . grad e: i will send you a pointer to a java applet that does that , it 's a fish - eye . you you have a node , and you click on it , and it shows you all the connections , and then if you click on something else that moves away , that goes into the middle . and maybe there is an easy way of interfacing those two . if that does n't work , it 's not a problem we need to solve right now . what i ' m what my job is , i will , give you the input in terms of the internal structure . maybe node by node , like this ? or should i collect it all and grad g: , just any like rough representation of the entire belief - net is probably best . grad e: and you 're gon na be around ? t again , always tuesdays and thursdays afternoon - ish ? as usual ? or will that change ? grad g: , like i c . this week i , i have a lot of projects and but after that i will generally be more free . so yes , i might be around . and g , generally if you email me also be around on other days . professor c: and this is not a crisis that , you do , e everybody who 's a student should , do their work , get their c courses all in good shape and then we 'll dig d dig down on this . grad e: , that 's no , that 's good . that means i have i h spend this week doing it . grad b: how do you go about this process of deciding what these connections are ? i know that there 's an issue of how to weight the different things too , and . right ? do you just and see if it professor c: there there 're two different things you do . one is you design and the other is you learn . so what we 're gon na do initially is do design , and , i if you will , . 
that is use your best knowledge of the domain to , hypothesize what the dependencies are and . if it 's done right , and if you have data then , there are techniques for learning the numbers given the structure and there are even techniques for learning the structure , although that takes a lot more data , and it 's not as @ and so on . so but for the limited amount of we have for this particular exercise we 'll just design it . grad e: fo - hopefully as time passes we 'll get more and more data from heidelberg and from people actually using it and . so but this is the long run . but to solve our problems ag a mediocre design will do in the beginning . grad b: that 's right . , and , speaking of data , are there i could swore , i could swear i saw it sitting on someone 's desk at some point , but is there a transcript of any of the , initial interactions of people with the system ? cuz , i ' m still itching to look at what look at the , and see what people are saying . professor c: - . make yourself a note . so and , keith would like the german as the english , so whatever you guys can get . grad b: u ok . , i found the , the audio of some of those , and , it sounded like i did n't want to trudge through that , . it was just strange , but . grad e: we probably will not get those to describe because they were trial runs . , but that 's th but we have data in english and german already . professor c: ok , so while we 're still at this top level , anything else that we oughta talk about today ? grad b: , wanted to , s like mention as an issue , last meeting i was n't here because i went to a linguistics colloquium on the fictive motion , grad b: and that was pretty interesting and , seems to me that will fairly be of relevance to what we 're doing here because people are likely to give descriptions like , " what 's that thing right where you start to go up the hill , " like that , meaning a few feet up the hill or whatever from some reference point and all that so , i ' m in terms of , people trying to state locations or , all that , this is gon na be very relevant . so , now that was the talk was about english versus japanese , which the japanese does n't affect us directly , except that , some of the construction he 'd what he talked about was that in english we say things like th , " your bike is parked across the street " and we use these prepositional phrases , " , if you were to move across the street you would be at the bike " , but in japanese the more conventionalized tendency is to use a description of " where one has crossed to the river , there is a tree " . , and , you can actually say things like , " there 's a tree where one has crossed the river , but no one has ever crossed the river " , like that . so the idea is that this really is that 's supposed show that 's it 's really fictive and so on . but that construction is also used in english , like " right where you start to go up the hill " , or " just when you get off the train " , like that to , to indicate where something is . grad e: , the deep map project which is undergoing some renovation at the moment , but this is a three language project : german , english , japanese . and , we have a , i have taken care that we have the japanese generation and . and so i looked into spatial description . so we can generate spatial descriptions , how to get from a to b . and and information on objects , in german , english , and japanese . and there is a huge project on spatial descriptions differences in spatial descriptions . 
, if yo if you 're interested in that , so how , it does go d all the way down to the conceptual level to some extent . grad e: but , we should leave japanese constructions maybe outside of the scope for now , but definitely it 's interesting to look at cross the bordered there . phd a: are are you going to p pay any attention to the relative position of the direction relative to the speaker ? , there are some differences between hebrew and english . we can say " park in front of the car " as you come beh you drive behind the car . in hebrew it means " park behind the car " , because to follow the car is defined as it faces you . phd a: while in english , front of the car is the absolute front of the car . so . phd a: so , i is german closer to e , to e i do n't think it 's related to syntax , though , so it may be entirely different . grad e: . did you ever get to look at the rou paper that i sent you on the on that problem in english and german ? carroll , ninety - three . i there is a study on the differences between english and german on exactly that problem . so it 's they actually say " the monkey in front of the car , where 's the monkey ? " and , they found statistically very significant differences in english and german , so i it might be , since there are only a finite number of ways of doing it , that german might be more like hebrew in that respect . the solution they proposed was that it was due to syntactic factors . grad e: that syntactic facto factors do play a role there , wh whether you 're more likely , to develop , choices that lead you towards using intrinsic versus extrinsic reference frames . grad b: , it seems to me that you can get both in english depending o , like , " in front of the car " could like , here 's the car sideways to me in between me and the car 's in front of the car , or whatever . i could see that , but but anyway , so , this was a very good talk on those kinds of issues and so on . grad e: also give you , a pointer to a paper of mine which is the ultimate taxonomy of reference frames . grad e: i ' m the only person in the world who actually knows how it works . not really . grad e: it 's called a it 's it 's spatial reference frames . you actually have only . if you wanna have a this is usually i should there should be an " l " , though . actually you have only have two choices . you can either do a two - point or a three - point which is you you 're familiar with th with the " origo " ? where that 's the center " origo " is the center of the f frame of reference . and then you have the reference object and the object to be localized . ok ? in some cases the origo is the same as the reference object . grad e: origo is a terminus technikus . in that sense , that 's even used in the english literature . origo . grad e: and , so , this video tape is in front of me . i ' m the origo and i ' m also the reference object . those are two - point . and three - point relations is if something has an intrinsic front side like this chair then your f shoe is behind the chair . and , reference object and . no , from my point of view your shoe is left of the chair . grad b: you you can actually say things like , " it 's behind the tree from me " like that , in certain circumstances in english , right ? as " from where i ' m standing it would appear that " grad e: and then and then here you on this scale , you have it either be ego or allocentric . and that 's it . so . egocentric two - point , egocentric three - point , or you can have allocentric . 
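A toy rendering of the two-point versus three-point distinction just described, using 2-D coordinates for origo, reference object, and the object to be localized. The geometry, the assumed facing direction in the two-point case, and the left/right convention are illustrative guesses, not the taxonomy from the cited paper.

```python
import math

def relation(origo, ref, obj):
    """'front'/'behind'/'left'/'right' of obj relative to ref, as seen
    from origo (the relative, viewer-centred reading)."""
    if origo == ref:                      # two-point case: origo == reference
        heading = (1.0, 0.0)              # assumed facing direction
    else:                                 # three-point case
        dx, dy = ref[0] - origo[0], ref[1] - origo[1]
        n = math.hypot(dx, dy)
        heading = (dx / n, dy / n)        # viewer looks toward the reference
    vx, vy = obj[0] - ref[0], obj[1] - ref[1]
    along = vx * heading[0] + vy * heading[1]    # + = far side of ref
    across = heading[0] * vy - heading[1] * vx   # + = viewer's left
    if abs(along) >= abs(across):
        return "behind" if along > 0 else "front"
    return "left" if across > 0 else "right"

# egocentric three-point: "the shoe is behind the chair" from my viewpoint
print(relation(origo=(0, 0), ref=(2, 0), obj=(3, 0)))    # behind
# allocentric: the same computation with the church as origo instead of me
print(relation(origo=(5, 5), ref=(5, 0), obj=(4, 0)))    # right
```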
so , " as seen from the church , the town hall is right of that , fire station " . aa - it 's hardly ever used but it 's w professor c: . , why do n't you just put it on the web page ? there 's this edu grad e: it 's also all on my home page at eml . it 's called " an anatomy of a spatial description " . professor c: maybe just put a link on . , there something that i did n't know until about a week ago or so , is , there are separate brain areas for things within reach , and things that are out of reach . so there 's all this linguistic about , near and far , or yon and . so this is all this is there 's this linguistic facts . but , the . here 's the way the findings go . that , they do mri , and if you 're got something within reach then there 's one of your areas lights up , and if something 's out of reach a different one . but here 's the amazing result , they say . you get someone with a deficit so that they have a perfectly normal ability at distance things . so the s typical task is subdivision . so there 's a line on the wall over there , and you give them a laser pointer , and you say , " where 's the midpoint ? " and they do fine . if you give them the line , and they have to touch it , they ca n't . there 's just that part of the brain is n't functioning , so they ca n't do that . here 's the real experiment . the same thing on the wall , you give them a laser , " where is it ? " , they do it . give them a stick , long stick , and say " do it " , they ca n't do it . so there 's a remapping of distant space into nearby space . professor c: and so this doe this is , first of all , it explains something that i ' ve always wondered about and i 'll do this test on you guys as . how - i have had an experience , not often , but a certain number of times , when , i ' m working with a tool , a screwdriver , for a long time , i start feeling the tip directly . not indirectly , but you actually can feel the tip . and people who are accomplished violinists and like that , claim they also have this thing where you get a direct sensation of , physical sensation , of the end affector . phd a: the so . i it 's not exactly the th same thing , but s it 's getting close to that . professor c: i it feels like your as if your neurons had extended themselves out to this tool , and you 're feeling forces on it and you deal directly with it . phd a: i once i was playing with those devices that allow you to manipulate objects when it 's dangerous to get close ? so you can insert your hand something phd a: and there 's a correspondence between so i played with it . after a while , you do n't feel the difference anymore . it 's phd a: very you stop back and suddenly it goes away and you have to work again to recapture it , but . professor c: right , so anyway , so so this was the first actual experimental evidence i 'd seen that was consistent with this anecdotal . and it makes a lovely def story about why languages , make this distinction . there are behavioral differences too . things you can reach are really quite different than things you ca n't . but there seems to be an actu really deep embodied neural difference . and i this is , so . in addition to the e professor c: exactly . so in addition to e ego and allocentric which appear all over the place , you also have this proximal - distal thing which is very deeply embedded . s grad e: , dan montello , he does the th the cognitive map world , down in santa barbara . 
and he always talks about these he he already i probably most likely without knowing this evidence is talking about these small scale spaces that you can manipulate versus large scale environmental spaces . professor c: there 's been a lot of behavioral things o on this , but that was the first neur neuro - physiological thing i saw . anyway , so we 'll look at this . and . so , all of these issues now are now starting to come up . so , now we 're now done with demos . we 're starting to do science , and so these issues about , reference , and spatial reference , discourse reference , - all this , deixis which is part of what you were talking about , so , all of this is coming up essentially starting now . so we got ta do all this . so there 's that . and then there 's also a set of system things that come up . so " ok , we 're not using their system . that means we need our system . " it it follows . and so , in addition to the business about just getting the linguistics right , and the formalism and , we 're actually gon na build something and , johno is point person on the parser , analyzer , whatever that is , and we 're gon na start on that in parallel with the , the grammar . but to do that we 're gon na need to make some decisions like ontology , so , and so this is another thing where we 're gon na , have to get involved and make s relatively early , make some decisions on , " is there an ontology api that " there 's a standard way of getting things from ontologies and we build the parser and around that , or is there a particular ontology that we 're gon na standardize on , and if so , is there something that we can use there . i does either the smartkom project or one of the projects at eml have something that we can just p pull out , for that . , so there are gon na be some things like that , which are not science but system . but we are n't gon na ignore those cuz we 're not only going the plan is not only to lay out this thing , but to actually build some of it . and how much we build , and . professor c: . part of it , if it works right , is wh it looks like we 're now in a position that the construction analyzer that we want for this applied project can be the same as the construction analyzer that nancy needs for the child language modeling . so . it 's always been out of phase but it now seems that , there 's a good shot at that . so we ' ve talked about it , and the hope is that we can make these things the same thing , and it 's only w in both cases it 's only one piece of a bigger system . but it would be if that piece were exactly the same piece . it was just this construction analyzer . and so we think we have a shot at that . so . the for to to come full circle on that , this formalization task , is trying to get the formalism into a shape where it can actually professor c: d , where it actually is covers the whole range of things . and the thing that got mark into the worst trouble is he had a very ambitious thing he was trying to do , and he insisted on trying to do it with a limited set of mechanisms . it turned out , inherently not to cover the space . and it just it was just terribly frustrating for him , and he seemed fully committed to both sides of this i irreconcilable thing . professor c: so there 's , deep , really deep , emotional commitment to a certain theory being , complete . professor c: we - it has n't it certainly has n't been observed , in any case . grad f: i have a problem , then . it 's so . 
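On the ontology-API question raised above, one hedged reading is to build the parser against a small interface first and bind it later to whichever ontology (SmartKom's, something from EML, or a standard one) gets chosen. The Protocol below and the toy ontology behind it are entirely hypothetical.

```python
from typing import Iterable, Protocol

class Ontology(Protocol):
    """Minimal access interface the analyzer would code against."""
    def is_a(self, concept: str, ancestor: str) -> bool: ...
    def roles(self, concept: str) -> Iterable[str]: ...

class ToyOntology:
    """A stand-in implementation with a tiny hand-written hierarchy."""
    def __init__(self):
        self._parents = {"Castle": "Building", "Building": "Entity"}
        self._roles = {"Building": ["location", "has-entrance"]}

    def is_a(self, concept, ancestor):
        while concept is not None:          # walk up the parent chain
            if concept == ancestor:
                return True
            concept = self._parents.get(concept)
        return False

    def roles(self, concept):
        return self._roles.get(concept, [])

ont: Ontology = ToyOntology()
print(ont.is_a("Castle", "Entity"), list(ont.roles("Building")))
```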
whether i do depends on whether i ' m talking to him or him probably . professor c: why a actually , , you do but , th the thing you have to i m implement is so small that . grad f: which meeting i ' m in . it 's ok to be purist within that context . professor c: but to try to do something upscale and purist particularly if what you 're purist about does n't actually work , is real hard . professor c: and then the other thing is while we 're doing this robert 's gon na pick a piece of this space , professor c: , for his absentee thesis . you all know that you can just , in germany almost just send in your thesis . grad e: the - th there there 's a drive - in thesis sh joint over in saarbruecken . professor c: it costs a lot . the the amount you put in your credit card and as . but , but anyway , so , that 's , also got ta be worked out , hopefully over the next few weeks , so that it becomes clear , what piece , robert wants to jump into . and , while we 're at this level , there 's at least one new doctoral student in computer science who will be joining the project , either next week or the first of august , depending on the blandishments of microsoft . so , de . and her name is eva . it really is . nobody believed th that grad f: , it had to be a joke , of your part , like " johno made it up , i ' m . " professor c: so , she 's now out here she 's moved , and she 'll be a student as of then . and probably she 'll pick up from you on the belief - net , so sh she 'll be chasing you down and like that . professor c: , against all traditions . and actually i talked today to a undergraduate who wants to do an honors thesis on this . professor c: so anyway , but she 's another one of these ones with a three point nine average and so on . , so , i ' ve give i ' ve given her some things to read . so we 'll see how this goes . there 's yet another one of the incoming first - year graduate students who 's expressed interest , so we 'll see how that goes . anyway , so , as far as this group goes , it 's certainly worth continuing for the next few weeks to get closure on the belief - net and the ideas that are involved in that , and what are th what are the concepts . we 'll see whether it 's gon na make sense to have this be separate from the other bigger effort with the formalization or not , i ' m not . it partly depends on w what your thesis turns out to be and how that goes . s so , we 'll see . and then , ami , you can decide , how much time you wanna put into it and , it 's beginning to take shap shape , so and , you will find that if you want to look technically at some of the your traditional questions in this light , keith , who 's buil building constructions , will be quite happy to see what , you envision as the issues and the problems and , how they might get reflected in constructions . i suspect that 's right . phd a: i may have to go to switzerland for in june or beginning of july for between two weeks and four weeks , but , after that or before that . professor c: fine . and , if it 's useful we can probably arrange for you to drop by and visit either at heidelberg or at the german ai center , while you 're in the neighborhood . phd a: be actu actually i ' m invited to do some consulting with a bank in geneva which has an affiliation with a research institute in geneva , which i forgot the name of . 
professor c: e o do y , we 're connected to there 's a there 's a very significant connection between we 'll we 'll go through this , icsi and epfl , which is the , it 's the fr ge - germany 's got two big technical institutes . there 's one in zurich , e t and then there 's one , the french speaking one , in lausanne , professor c: f l . so find out who they are associated with in geneva . probably we 're connected to them . professor c: and so anyway we c we can m undoubtedly get ami to give a talk at eml like that . while he 's in grad e: . the one you gave here a couple of weeks ago would be of interest there , too . professor c: a lot of interest . actually , either place , dfki or , so , and if there is a book , that you 'll be building up some audience for it . and you 'll get feedback from these guys . professor c: cuz they ' ve actually these dfki guys have done as much as anyone over the last decade in trying to build them . so we 'll set that up . so , unless we wanna start digging into the belief - net and the decisions now , which would be fine , it 's probably grad e: i tho it 's probably better if i come next week with the version o point nine of the structure . professor c: so , how about if you two guys between now and next week come up with something that is partially proposal , and partially questions , saying " here 's what we think we understand , here are the things we think we do n't understand " . and that we as a group will try to finish it . what i 'd like to do is shoot f for finishing all this next monday . , " these are the decisions " i do n't think we 're gon na get lots more information . it 's a design problem . we and let 's come up with a first cut at what this should look like . and then finish it up . does that so make sense ? grad e: and , the sem semester will be over next week but then you have projects for one more week to come ? grad g: no , i 'll be done everything by this by the end of this week . grad d: nnn . this , i ' ve i have projects , but then the my prof professor of one of my classes also wa has a final that he 's giving us . and he 's giving us five days to do it which means it going to be hard . grad d: so , the seventeenth will definitely be the last day , like it or not for me . professor c: so let 's do this , and then we there 's gon na be some separate co these guys are talking , we have a group on the formalization , nancy and johno and i are gon na talk about parsers . so there 're various kinds of , nothing gets done even in a meeting of seven people , so , two or three people is the size in which actual work gets done . professor c: so we 'll do that . , the other thing we wanna do is catch up with , ellen and see what she 's doing because the image schemas are going to be , an important pa professor c: we we want those , and we want them formalized and like that . so let me make a note to do that . grad b: , i ' m actually probably going to be in contact with her pretty soon anyway because of various of us students were going to have a reading group about precisely that thing over the summer , professor c: that 's great ! , i shweta mentioned that , although she said it 's a secret . ###summary: the translation of smartkom to english is in its final stages. the synthesis module will be the last one to do , after the english syntax trees are completed. the system is still buggy and unstable , but it will soon be ready for a demonstration. this is the first of two working demos required for the project. 
further than that , there are no restrictions on the focus of the research or its possible applications. for example , issues like spatial descriptions could be investigated. the variety of linguistic conventions seem to develop around an ego/allo-centric and a proximal/distal paradigm. the latter is also reflected in neuro-physiological data. from an engineering perspective , the belief-net for the ave task should be completed within a few weeks. the majority of the nodes are already there. this leaves the dependencies between them and the rules of computation to be set. since the whole system is going to be re-designed , there are major decisions to be taken regarding the parser and the ontology , as well as what can be re-used from past eml projects. in parallel , another team is working on formalisation and notation. finally , more ideas are expected to come from students and their research. the final english smartkom demo will be presented to the whole institute once the system is de-bugged and stabilised. after the demo , the focus of research can switch towards purely scientific goals , including issues on ontology , deep semantic constructions , execution engines etc. moreover , a new system will be designed for the project and at least some parts of it should be built. similarly , the construction analyser should be a single , general tool working for both the tourist domain and child language modelling. the focus for the next meeting will be on the belief-net , of which a working demo should be complete in the next few weeks. since there are not enough data , its connections and weights will have to be designed. although javabayes has been the tool of choice until now , the possibility that hugin could be a better option should be investigated. in order to promote the collaboration with eml , the group-ware server there will be updated with all progress being made in the two sites. a talk on some of the issues will also be organised to take place at dfki. the german smartkom version available on the server does not work. the english version , although still under development , does work , however , the system is still unstable as -apart from other reasons- it was initially built to work with a touch screen. de-bugging and cleaning up has to take place before any new modules are added on it. as regards the belief-net , no connections and dependencies have been built into it. these will have to be guessed instead of learnt through data , as not enough data is available for such a task. finally , it has been noted that the javabayes gui does not satisfy all the presentation requirements for this belief-net and modifying the underlying code would be too time-consuming. the german smartkom system has been translated to english up to the speech synthesis level. the german syntax trees are currently being adapted to english. these also contribute information to the synthesis module in order to achieve better pronunciation. the current english version is probably the best working one , since some of the problems with the original system have been corrected. the design of the belief-net has also progressed significantly: the vast majority of the nodes have been identified and the feasibility of the task from a technical point of view has been confirmed.
grad b: so , i what it is . but all i know is that it seems like every time i am up here after a meeting , and i start it , it works fine . and if i ' m up here and i start it and we 're all sitting here waiting to have a meeting , it gives me that error message and i have not yet sat down with been able to get that error message in a point where sit down and find out where it 's occurring in the code . postdoc e: was it a pause , or ? ok . was it on " pause " ? professor d: so so the , the new procedural change that just got suggested , which is a good idea is that , we do the digit recordings at the end . and that way , if we 're recording somebody else 's meeting , and a number of the participants have to run off to some other meeting and do n't have the time , then they can run off . it 'll mean we 'll get somewhat fewer , sets of digits , but , that way we 'll cut into people 's time , if someone 's on strict time , less . so , i th i think we should start doing that . , so , let 's see , we were having a discussion the other day , maybe we should bring that up , about , the nature of the data that we are collecting . @ that , we should have a fair amount of data that is , collected for the same meeting , so that we can , i . wh - what were some of the points again about that ? is it phd f: , ok , i 'll back up . , at the previous at last week 's meeting , this meeting i was griping about wanting to get more data and i talked about this with jane and adam , and was thinking of this mostly just so that we could do research on this data , since we 'll have a new this new student di does wanna work with us , phd f: and he 's already funded part - time , so we 'll only be paying him for half of the normal part - time , phd f: so he 's comes from a signal - processing background , but i liked him a lot cuz he 's very interested in higher level things , like language , and disfluencies and all kinds of eb maybe prosody , phd f: so he 's just getting his feet wet in that . anyway , ok , maybe we should have enough data so that if he starts he 'd be starting in january , next semester that we 'd have , enough data to work with . phd f: but , jane and adam brought up a lot of good points that just posting a note to berkeley people to have them come down here has some problems in that you m you need to make that the speakers are who you want and that the meeting type is what you want , and . so , about that and it 's still possible , but i 'd rather try to get more regular meetings of types that we know about , and hear , then a mish - mosh of a bunch of one - time phd f: , just because it would be very hard to process the data in all senses , both to get the , to figure out what type of meeting it is and to do any higher level work on it , like , i was talking to morgan about things like summarization , or what 's this meeting about . it 's very different if you have a group that 's just giving a report on what they did that week , versus coming to a decision and . so . 
then i was , talking to morgan about some new proposed work in this area , a separate issue from what the student would be working on where i was thinking of doing some summarization of meetings or trying to find cues in both the utterances and in the utterance patterns , like in numbers of overlaps and amount of speech , raw cues from the interaction that can be measured from the signals and from the diff different microphones that point to hot spots in the meeting , or things where is going on that might be important for someone who did n't attend to listen to . and in that , regard , we definitely w will need it 'd b it 'd be for us to have a bunch of data from a few different domains , or a few different kinds of meetings . so this meeting is one of them , although i ' m not participate if i , i would feel very strange being part of a meeting that you were then analysing later for things like summarization . , and then there are some others that menti that morgan mentioned , like the front - end meeting and maybe a networking group meeting . phd f: but , for anything where you 're trying to get a summarization of some meeting meaning out of the meeting , it would be too hard to have fifty different kinds of meetings where we did n't really have a good grasp on what does it mean to summarize , but rather we should have different meetings by the same group but hopefully that have different summaries . and then we need a couple that of we do n't wanna just have one group because that might be specific to that particular group , but @ three or four different kinds . phd f: see , i ' ve never listened to the data for the front - end meeting . phd f: ok . but maybe that 's enough . so , in general , i was thinking more data but also data where we hold some parameters constant or fairly similar , like a meeting about of people doing a certain work where at least half the participants each time are the same . professor d: now , let l let me just give you the other side to that cuz i ca because i do n't disagree with that , but there is a complimentary piece to it too . , for other kinds of research , particularly the acoustic oriented research , i actually feel the opposite need . i 'd like to have lots of different people . professor d: as many people here a and talking about the thing that you were just talking about it would have too few people from my point of view . i 'd like to have many different speakers . so , i would also very much like us to have a fair amount of really random scattered meetings , of somebody coming down from campus , and , professor d: but if we only get one or two from each group , that still could be useful acoustically just because we 'd have close and distant microphones with different people . postdoc e: can i say about that the issues that adam and i raised were more a matter of advertising so that you get more native speakers . because if you just say an and in particular , my suggestion was to advertise to linguistics grad students because there you 'd have so people who 'd have proficiency enough in english that , it would be useful for purposes . postdoc e: but , i ' ve been i ' ve gathered data from undergrads at on campus and if you just post randomly to undergrads you 'd get such a mixed bag that it would be hard to know how much conversation you 'd have . 
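As a rough illustration of the "raw cues from the interaction" idea: score fixed windows of a meeting by speech time plus up-weighted cross-speaker overlap time as a crude hot-spot indicator. The segment format, window size, and weighting here are invented for the sketch.

```python
# (start_sec, end_sec, speaker) from per-channel speech detection (assumed)
segments = [(0, 5, "A"), (4, 9, "B"), (8, 10, "A"), (20, 30, "C")]

def hotspot_scores(segments, total_s, win_s=5):
    scores = []
    for w0 in range(0, int(total_s), win_s):
        w1 = w0 + win_s
        spans = [(max(s, w0), min(e, w1), spk) for s, e, spk in segments
                 if s < w1 and e > w0]          # clip segments to the window
        speech = sum(e - s for s, e, _ in spans)
        overlap = 0.0
        for i in range(len(spans)):
            for j in range(i + 1, len(spans)):
                if spans[i][2] != spans[j][2]:  # only cross-speaker overlap
                    o = (min(spans[i][1], spans[j][1])
                         - max(spans[i][0], spans[j][0]))
                    overlap += max(o, 0)
        scores.append((w0, speech + 2 * overlap))   # up-weight overlaps
    return scores

# highest-scoring window = candidate hot spot to listen to first
print(max(hotspot_scores(segments, total_s=30), key=lambda s: s[1]))
```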
and and the english you 'd have the language models would be really hard to build professor d: , ok , first place , i do n't think we 'd just want to have random people come down and talk to one another , there should be a meeting that has some goal and point cuz that 's what we 're investigating , phd f: it has to be a pre - existing meeting , like a meeting that would otherwise happen anyway . professor d: so , . so i was thinking more in terms of talking to professors , and , senior , d and , doctoral students who are leading projects and offering to them that they have their hold their meeting down here . professor d: , that 's the first point . the second point is that for some time now , going back through berp that we have had speakers that we ' ve worked with who had non - native accents and i th that postdoc e: , . i ' m not saying accents . u the accent 's not the problem . postdoc e: no , it 's more a matter of , proficiency , e just simply fluency . , i deal with people on campus who sometimes people , undergraduates in computer science , have language skills that make , that their fluency and writing skills are not so strong . postdoc e: but , it 's like when you get into the graduate level , no problem . , i ' m not saying accents . grad b: , that the only thing we should say in the advertisement is that the meeting should be held in english . and and if it 's a pre - existing meeting and it 's held in english , it 's probably ok if a few of the people do n't have , g particularly good english skills . postdoc e: ok , now can i say the other aspect of this from my perspective which is that , there 's this issue , you have a corpus out there , it should be used for multiple things cuz it 's so expensive to put together . postdoc e: and if people want to approach , i so i know e this the idea of computational linguistics and probabilistic grammars and all may not be the focus of this group , but the idea of language models , which are fund generally speaking , t terms of like the amount of benefit per dollar spent or an hour invested in preparing the data , if you have a choice between people who are pr more proficient in { nonvocalsound } , i more fluent , more close to being academic english , then it would seem to me to be a good thing . postdoc e: because otherwise y you do n't have the ability to have , so if you have a bunch of idiolects that 's the worst possible case . if you have people who are using english as a as an interlanguage because they do n't , they ca n't speak in their native languages and but their interlanguage is n't really a match to any existing , language model , this is the worst case scenario . postdoc e: i ' m just thinking that we have to think at a higher level view , could we have a language model , a grammar , that , wo would be a possibility . so y so if you wanted to bring in a model like dan jurafsky 's model , an and do some top - down , it to help th the bottom - up and merge the things or whatever , it seems like , i do n't see that there 's an argument i ' m i what is that why not have the corpus , since it 's so expensive to put together , useful for the widest range of central corp things that people generally use corpora for and which are , used in computational linguistics . that 's that 's my point . which which includes both top - down and bottom - up . professor d: ok , i let 's see what we can get . 
, it that if we 're aiming at , groups of graduate students and professors and who are talking about things together , and it 's from the berkeley campus , probably most of it will be ok , postdoc e: yes , that 's fine . that 's fine . exactly . and my point in m in my note to liz was that undergrads are an iff iffy population . grad b: , not to mention the fact that i would be hesitant certainly to take anyone under eighteen , probably even an anyone under twenty - one . grad b: what 's that ? , age - ist . the " eighteen " is because of the consent form . phd f: i have a , question . , morgan , you were mentioning that mari may not use the k equipment from ibm if they found something else , cuz there 's a professor d: they 're they 're , they 're d they 're assessing whether they should do that or y do something else , hopefully over the next few weeks . phd f: cuz , one remote possibility is that if we st if we inherited that equipment , if she were n't using it , could we set up a room in the linguistics department ? and and , there may be a lot more or in psych , or in comp wherever , in another building where we could , record people there . we 'd have a better chance grad b: we 'd need a real motivated partner to do that . we 'd need to find someone on campus who was interested in this . phd f: right , but but if there were such a it 's a remote possibility , then , one of us could , go up there and record the meeting rather than bring all of them down here . so it 's just a thought if they end up not using the hardware . professor d: , the other thing that i was hoping to do in the first place was to turn it into some portable thing so you could wheel it around . but . , and grad b: , i know that space is really scarce on at least in cs . , to actually find a room that we could use regularly might actually be very difficult . phd f: the idea is , if they have a meeting room and they can guarantee that the equipment will be safe and , and if one of us is up there once a week to record the meeting phd c: but i you need , another portable thing a another portable equipment to do , more e easier the recording process , out from icsi . and probably . i . , if you want to record , a seminar or a class , in the university , you need it - it would be very difficult to put , a lot of , head phones in different people when you have to record only with , this , d device . grad b: , but if we wanna just record with the tabletop microphones , that 's easy . right ? that 's very easy , professor d: actually , that 's a int that raises an interesting point that came up in our discussion that 's maybe worth repeating . we realized that , when we were talking about this that , ok , there 's these different things that we want to do with it . so , it 's true that we wanna be selective in some ways , the way that you were speaking about with , not having an interlingua and , these other issues . but on the other hand , it 's not necessarily true that we need all of the corpus to satisfy all of it . so , a as per the example that we wanna have a fair amount that 's done with a small n recorded with a small , typ number of types of meetings but we can also have another part that 's , just one or two meetings of each of a range of them and that 's ok too . , i we realized in discussion that the other thing is , what about this business of distant and close microphones ? , we really wanna have a substantial amount recorded this way , that 's why we did it . 
but what about for th for these issues of summarization , a lot of these higher level things you do n't really need the distant microphone . professor d: you d you do n't ne it does n't you just need some microphone , somewhere . phd f: you can use , but that any data that we spend a lot of effort { nonvocalsound } to collect , each person who 's interested in , we have a cou we have a bunch of different , slants and perspectives on what it 's useful for , they need to be taking charge of making they 're getting enough of the data that they want . and so in my case , there w there is enough data for some kinds of projects and not enough for others . phd f: and so { nonvocalsound } i ' m looking and thinking , " i 'd be glad to walk over and record people and so { nonvocalsound } forth if it 's to help th in my interest . " and other people need to do that for themselves , h or at least discuss it so that we can find some optimal professor d: right . so that but that i ' m raising that cuz it 's relevant exactly for this idea up there that if you think about , " , gee , we have this really complicated setup to do , " maybe you do n't . professor d: maybe if if really all you want is to have a recording that 's good enough to get a , a transcription from later , you just need to grab a tape recorder and go up and make a recording . , we could have a fairly we could just get a dat machine and phd f: , i agree with { nonvocalsound } jane , though , on the other hand that so that might be true , you may say , summarization , that sounds very language oriented . you may say , " , you just do that from transcripts of a radio show . " , you do n't even need the speech signal . but what you what i was thinking is long term what would be neat is to be able to pick up on suppose you just had a distant microphone there and you really wanted to be able to determine this . there 's lots of cues you 're not gon na have . so i do think that long term you should always try to satisfy the greatest number of interests and have this parallel information , which is really what makes this corpus powerful . professor d: but that the i we ca n't really underestimate the difficulty should n't really u underestimate the difficulty of getting a setup like this up . and so , it took quite a while to get that together and to say , " , we 'll just do it up there , " if you 're talking about something simple , where you throw away a lot of these dimensions , then you can do that right away . talking about something that has all of these different facets that we have here , it wo n't happen quickly , it wo n't be easy , and there 's all sorts of issues about th keeping the equipment safe , or else hauling it around , and all sorts of o professor d: the first priority should be to pry to get try to get people to come here . phd f: , i and we can get people to come here , that but the issue is you definitely wanna make that the group you 're getting is the right group so that you do n't waste a lot of your time { nonvocalsound } and the overhead in bringing people down . grad b: , i had a i spoke with some people up at haas business school who volunteered . should i pursue that ? grad b: they they originally they ' ve decided not to do go into speech . so i ' m not whether they 'll still be so willing to volunteer , but i 'll send an email and ask . grad b: i 'll tell them about the free lunch . and they 'll say there 's no such thing . 
phd f: i 'd love to get people that are not linguists or engineers , cuz these are both weird professor d: , the they make funny sounds . the o the other the other thing is , that we talked about is give to them , burn an extra cd - rom . professor d: and give them so if they want a { nonvocalsound } and audio record of their phd f: , that was he meant , " give them a music cd , " like they g then he said a cd of the of their speech and i it depends of what audience you 're talking to , but , i personally { nonvocalsound } would not want a { nonvocalsound } cd of my meeting , grad b: it 'd be fun . it would just be fun , if nothing else , . it 's a novelty item . professor d: but it als it it also builds up towards the goal . we 're saying , " look , you 're gon na get this . is - is is n't that neat . then you 're gon na go home with it . it 's actually p it 's probably gon na be pretty useless to you , but you 'll ge appreciate , where it 's useful and where it 's useless , and then , we 're gon na move this technology , so it 'll become useful . " so . phd a: what if you could tell them that you 'll give them the transcripts when they come back ? grad b: , . , anyone can have the transcripts . so . we could point that out . postdoc e: i hav i have to raise a little eensy - weensy concern about doing th giving them the cd immediately , because of these issues of , this , where maybe ? grad b: that 's a good point . right , it ca n't be the internal one . postdoc e: , that 's right , say " , i got this cd , and , your honor , i " professor d: so , let 's see . so that was that topic , and then , i another topic would be where are we in the whole disk resources question for grad b: we are slowly getting to the point where we have enough sp room to record meetings . so i did a bunch of archiving , and still doing a bunch of archiving , i ' m in the midst of doing the p - files from , broadcast news . and it took eleven hours to do to copy it . grad b: , it 's abbott . it 's abbott , so it just but it 's a lot of data . professor d: sk - it 's copying from one place on abbott to another place on abbott ? grad b: so i ' m archiving it , and then i ' m gon na delete the files . so that will give us ten gigabytes of free space . grad b: and and and so one that that will be done , like , in about two hours . and so , at that point we 'll be able to record five more meetings . postdoc e: one thing the good news about that is that once it 's archived , it 's pretty quick to get back . grad b: , especially because i ' m generating a clone , also . so . and that takes a while . postdoc e: now , what will is the plan to g to so will be saved , it 's just that you 're relocating it ? , so we 're gon na get more disk space ? or did i ? grad b: no , the these are the p - files from broadcast news , which are regeneratable grad b: , if we really need to , but we had a lot of them . and for the full , hundred forty hour sets . and so they were two gigabytes per file and we had six of them . professor d: w w we are getting more space . we are getting , another disk rack and four thirty - six gigabyte disks . so but that 's not gon na happen instantaneously . grad b: the sun , ha , takes more disks than the andatico one did . the sun rack takes th - one took four and one took six , or maybe it was eight and twelve . whatever it was , fifty percent more . 
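A cautious sketch of the archive-then-delete step being described: copy the file, verify a checksum against the original, and only then remove it to reclaim the disk. The paths are placeholders, and the separate clone mentioned above is not modelled here.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path, bufsize=1 << 20):
    """Checksum a (possibly multi-gigabyte) file in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def archive_and_free(src: Path, archive_dir: Path) -> Path:
    dst = archive_dir / src.name
    shutil.copy2(src, dst)                    # copy, keeping timestamps
    if sha256(src) != sha256(dst):            # verify before deleting
        raise IOError(f"checksum mismatch for {src}")
    src.unlink()                              # reclaim the disk space
    return dst

# e.g. archive_and_free(Path("/abbott/bn/pfile1.pf"), Path("/archive/bn"))
```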
grad b: , what happened is that we bought all our racks and disks from andatico for years , according to dave , and andatico got bought by another company and doubled their prices . and so , we 're looking into other vendors . we by " we " dave . phd a: - . i ' ve been looking at the , aurora data and , first look at it , there were three directories on there that could be moved . one was called aurora , one was spanish , which was carmen 's spanish , and the other one was , spine . phd a: and so , i wrote to dan and he was very concerned that the spine was moving to a non - backed - up disk . so , i realized that , probably not all of that should be moved , just the cd - rom type data , the static data . so i moved that , and then , i asked him to check out and see if it was ok . before i actually deleted the old , but i have n't heard back yet . i told him he could delete it if he wanted to , i have n't checked today to see if he 's deleted it or not . and then carmen 's , i realized that when i had copied all of her to xa , i had copied there that was dynamic data . and so , i had to redo that one and just copy over the static data . and so i need to get with her now and delete the old off the disk . and then i lo have n't done any of the aurora . i have to meet with , stephane to do that . so . professor d: so , but , y you 're figuring you can record another five meetings with the space that you 're clearing up from the broadcast news , but , we have some other disks , some of which you 're using for aurora , but are we g do we have some other space now ? grad b: so , so , we have space on the current disk right now , where meeting recorder is , and that 's probably enough for about four meetings . grad b: no , no , it 's wherever the meeting recorder currently is . it 's di . phd a: ok , i but the i ' m moving from aurora is on the dc disk that we grad b: i do n't remember . th - it 's dc - it 's whatever that one is . grad b: do n't remember , it might be dc . and that has enough for about four more meetings right now . , we were at a hundred percent and then we dropped down to eighty - six for reasons i do n't understand . , someone deleted something somewhere . and so we have some room again . and then with broadcast news , that 's five or six more meetings , so , we have a couple weeks . , so , i think we 're ok , until we get the new disk . phd a: so should , one question i had for you was , we need we sh probably should move the aurora an and all that other off of the meeting recorder disk . is there another backed - up disk that of that would ? grad b: we should put it onto the broadcast news one . that 's probably the best thing to do . and that way we consolidate meeting recorder onto one disk rather than spreading them out . grad b: but , so we could ' jus just do that at the end of today , once the archive is complete , and i ' ve verified it . cuz that 'll give us plenty of disk . professor d: , ok , @ so , then i th the last thing i 'd had on my agenda was just to hear an update on what jose has been doing , phd c: i have , the result of my work during the last days . for your information because i read . , and the last , days , i work , in my house , in a lot of ways and thinking , reading , different things about the meeting recording project . and i have , some ideas . , this information is very useful . because you have the distribution , now . phd c: but for me , is interesting because , here 's i is the demonstration of the overlap , problem . 
phd c: it 's a real problem , a frequently problem , because you have overlapping zones , all the time . phd c: , by a moment i have , nnn , the , n i did a mark of all the overlapped zones in the meeting recording , with , a exact mark . phd c: heh ? that 's , yet b , by b by hand because , " why . " phd c: my my idea is to work i do i do n't @ i , if , it will be possible because i have n't a lot , enough time to work . , only just , six months , as , but , my idea is , is very interesting to work in the line of , automatic segmenter . but , in my opinion , we need , a reference session to t to evaluate the tool . grad b: yes , . and so are you planning to do that or have you done that already ? phd c: i plan , but , the idea is the following . now , i need ehm , to detect all the overlapping zones exactly . i will , talk about , in the blackboard about the my ideas . phd c: , this information , with , exactly time marks , for the overlapping zones overlapping zone , and , a speaker a pure speech , speaker zone . , zones of speech of , one speaker without any , noise , any acoustic event that , w , is not , speech , real speech . and , i need t true , silence for that , because my idea is to study the nnn the set of parameters , what , are more m more discriminant to , classify . the overlapping zones in cooperation with the speech zones . the idea is to use , i ' m not to yet , but my idea is to use a cluster algorithm or , nnn , a person strong in neural net algorithm to study what is the , the property of the different feat feature , to classify speech and overlapping speech . phd c: and my idea is , it would be interesting to have , a control set . and my control set , will be the , silence without , any noise . postdoc e: . that 's interesting . this is like a ground level , with it 's not it 's not total silence . phd c: , noise , claps , tape clips , the difference , event , which , has , a hard effect of distorti spectral distortion in the speech . phd c: , i have mark in that not in all the file , only , nnn , mmm , i have , ehm i do n't remind what is the quantity , but , i have marked enough speech on over and all the overlapping zones . i have , two hundred and thirty , more or less , overlapping zones , and is similar to this information , phd c: because with the program , i cross the information of , of jane with , my segmentation by hand . and is , mor more similar . phd c: and the idea is , i will use , i want my idea is , to { nonvocalsound } to classify . phd c: i need , the exact , mark of the different , zones because i want to put , for , each frame a label indicating . it 's a sup supervised and , hierarchical clustering process . i put , for each frame { nonvocalsound } a label indicating what is th the type , what is the class , which it belong . , the class you will { nonvocalsound } overlapping speech " overlapping " is a class , " speech " { nonvocalsound } @ the class that 's phd c: because , my idea is , in the first session , i need , to be that the information , that , i will cluster , is right . because , if not , i will , return to the speech file to analyze , what is the problems , phd c: . and i 'd prefer i would prefer , the to have , this labeled automatically , but , fro th i need truth . postdoc e: i ' ve got ta ask you . so , the difference between the top two , i so so i start at the bottom , so " silence " is clear . by " speech " do you mean speech by one sp by one person only ? 
postdoc e: so this is un ok , and then the top includes people speaking at the same time , or a speaker and a breath overlapping , someone else 's breath , or clicking , overlapping with speech so , that 's all those possibilities in the top one . phd c: one , two , three . but no , by th by the moment n . , in the first moment , because , i have information , of the overlapping zones , information about if the , overlapping zone is , from a speech , clear speech , from a one to a two speaker , or three speaker , or is the zone where the breath of a speaker , overlaps , onto , a speech , another , especially speech . postdoc e: so it 's basi it 's speech wi som with something overlapping , which could be speech but does n't need to be . professor d: no , but there 's but , she 's saying " where do you in these three categories , where do you put the instances in which there is one person speaking and other sounds which are not speech ? " phd c: , he here i put speech from , one speaker without , any events more . professor d: right , so where do you put speech from one speaker that does have a nonspeech event at the same time ? phd c: for for the by the @ no , @ because i want to limit the nnn , the study . grad b: , so that 's what he was saying before , is that he excluded those . phd c: , be why ? what 's the reason ? because i it 's the first study . the first professor d: , no , it 's a perfectly sensible way to go . we just wondered trying to understand what you were doing . postdoc e: we 're just cuz you ' ve talked about other overlapping events in the past . so , this is a subset . phd c: but , the first idea because , i what hap what will happen with the study . phd c: it 's the control set . ok ? it 's pure si pure silence with the machine on the roof . professor d: what you w what you m what you mean is that it 's nonspeech segments that do n't have impulsive noises . professor d: right ? cuz you 're calling what you 're calling " event " is somebody coughing or clicking , or rustling paper , or hitting something , which are impulsive noises . but steady - state noises are part of the background . which , are being , included in that . phd c: h here yet , yet i think , there are that some noises that , do n't wanted to be in that , in that control set . phd c: but i prefer , i prefer at the first , the silence with , this the of noise . professor d: right , it 's " background " might be a better word than " silence " . it 's just that the background acoustic phd c: is is only and , with this information the idea is , nnn , i have a label for each , frame and , with a cluster algorithm i and phd c: and i am going to prepare a test bed , , a set of feature structure , models . and my idea is phd c: so on because i have a pitch extractor yet . i have to test , but i phd c: i ha i have prepare . is a modified version of a pitch tracker , from , standar - stanford university in stanford ? no . from , , cambridge university . phd c: , i do n't remember what is the name of the author , because i have several i have , , library tools , from , festival and of from edinburgh , from cambridge , and from our department . phd c: and i have to because , in general the pitch tracker , does n't work very and grad b: bad . but , as a feature , it might be ok . so , we .
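a minimal sketch of the per-frame labeling jose describes above, with invented time marks standing in for the hand-marked zones (0 = silence/background, 1 = one-speaker speech, 2 = overlap):

```python
import numpy as np

# Invented interval marks (start_sec, end_sec, class); the real marks
# would come from the hand segmentation described above.
FRAME_STEP = 0.010  # 10 ms between frame centers (an assumption)
marks = [(0.0, 1.2, 0), (1.2, 4.7, 1), (4.7, 5.9, 2), (5.9, 8.0, 1)]

n_frames = int(round(marks[-1][1] / FRAME_STEP))
labels = np.zeros(n_frames, dtype=int)
for start, end, cls in marks:
    lo, hi = int(round(start / FRAME_STEP)), int(round(end / FRAME_STEP))
    labels[lo:hi] = cls
# Each row of a per-frame feature matrix can now be paired with
# labels[i] for the supervised clustering he outlines.
```

and for the pitch feature he mentions, one rough way to get a per-frame estimate with an off-the-shelf tracker; this uses librosa's pYIN, not the modified Cambridge tracker he refers to, and the file name is a placeholder:

```python
import numpy as np
import librosa

y, sr = librosa.load("meeting_chunk.wav", sr=16000)
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=60.0, fmax=400.0, sr=sr)

# Pitch trackers are unreliable on overlapped speech, which may itself be
# a usable cue: low voicing confidence could correlate with overlap.
pitch = np.nan_to_num(f0)  # unvoiced frames become 0
```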
phd c: this this is and th the idea is to , to obtain , , diff , different , no , a great number of fec , , twenty - five , thirty parameters , for each one . and in a first , nnn , step in the investi in the research in , my idea is try to , to prove , what is the performance of the difference parameter , to classify the different , what is the front - end approach to classify , the different , frames of each class and what is the , nnn , what is the , the error , of the data phd c: this is the , first idea and the second is try to , to use some ideas , similar to the linear discriminant analysis . , similar , because the idea is to study what is the contribution of , each parameter to the process of classify correctly the different parameters . phd c: , the classifier is nnn by the moment is , similar , nnn , that the classifier used , in a quantifier vectorial quantifier is , used to , some distance to put , a vector , in a class different . phd c: is ? w with a model , is only to cluster using a , @ or a similarity . phd c: a another possibility it to use a netw a neural network . but what 's the p what is my idea ? what 's the problem i see in if you use the neural network ? if w when this , mmm , cluster , clustering algorithm to can test , to can observe what happened you ca n't , put up with your hand in the different parameter , phd c: but if you use a neural net is a good idea , but you what happened in the interior of the neural net . professor d: , actually , you can do sensitivity analyses which show you what the importance of the different parce pieces of the input are . it 's hard to w what you it 's hard to tell on a neural net is what 's going on internally . but it 's actually not that hard to analyse it and figure out the effects of different inputs , especially if they 're all normalized . , but professor d: , then a decision tree is really good , but here he 's not like he has one , a bunch of very distinct variables , like pitch and this he 's talking about , like , a all these cepstral coefficients , and , in which case a any reasonable classifier is gon na be a mess , and it 's gon na be hard to figure out what professor d: , the other thing that one , this is , a good thing to do , to look at these things at least see what i 'd let me tell you what i would do . i would take just a few features . instead of taking all the mfcc 's , or all the plp 's or whatever , i would just take a couple . ok ? like like c - one , c - two , something like that , so that you can visualize it . and look at these different examples and look at scatter plots . ok , so before you do build up any fancy classifiers , just take a look in two dimensions , at how these things are split apart . that will give you a lot of insight of what is likely to be a useful feature when you put it into a more complicated classifier . and the second thing is , once you actually get to the point of building these classifiers , @ what this lacks so far is the temporal properties . so if you 're just looking at a frame and a time , you anything about , the structure of it over time , and so you may wanna build @ build a markov model of some sort , or else have features that really are based on some bigger chunk of time . professor d: but this is a good place to start . 
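a sketch of the two-dimensional look being suggested: just C1 versus C2, one point per frame, colored by class. it assumes the `labels` array from the earlier sketch, aligned to the same 10 ms frame rate; the file name is again a placeholder:

```python
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("meeting_chunk.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=160).T
n = min(len(mfcc), len(labels))  # `labels` from the frame-labeling sketch

for cls, name in {0: "silence", 1: "one speaker", 2: "overlap"}.items():
    sel = labels[:n] == cls
    plt.scatter(mfcc[:n][sel, 1], mfcc[:n][sel, 2], s=4, alpha=0.4, label=name)
plt.xlabel("C1")
plt.ylabel("C2")
plt.legend()
plt.show()
```

if a feature pair offers any separation between the classes, it shows up immediately in a plot like this, before any classifier is built.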
but do n't anyway , this is my suggestion , is do n't just , throw in twenty features at it , the deltas , and the delta del and all that into some classifier , even if it 's k - nearest - neighbors , you still wo n't know what it 's doing , even it 's , to it 's to have a better feeling for what it 's look at som some picture that shows you , " here 's these things , are offer some separation . " and , in lpc , the thing to particularly look at is , is something like , the residual postdoc e: can i ask ? it strikes me that there 's another piece of information , that might be useful and that 's simply the transition . so , w if you go from a transition of silence to overlap versus a transition from silence to speech , there 's gon na be a b a big informative area there , it seems to me . phd c: , because . i . but i is my own vision , of the project . phd c: i the meeting recorder project , for me , has , two , w has several parts , several p objective , because it 's a great project . but , at the first , in the acoustic , parts of the project , you we have two main objective . one one of these is to detect the change , the acoustic change . and for that , if you do n't use , , a speech recognizer , broad class , or not broad class to try to label the different frames , the ike criterion or bic criterion will be enough to detect the change . and probably . i would like to t prove . , probably . when you have , s the transition of speech or silence to overlap zone , this criterion is enough with probably with , this , the more use used normal , regular parameter mf - mfcc . you have to find you can find the mark . you can find the nnn , the acoustic change . but i understand that you your objective is to classify , to know that zone not is only a new zone in the file , that you have , but you have to know that this is overlap zone . because in the future you will try to process that zone with a non - regular speech recognizer model , i suppose . you will pretend to process the overlapping z zone with another algorithm because it 's very difficult to obtain the transcription from using a regular , normal speech recognizer . that , i is the idea . and so the , nnn the { nonvocalsound } the system will have two models . phd c: a model to detect more acc the mor most accurately possible that is p , will be possible the , the mark , the change and another model will @ or several models , to try s but several model robust models , sample models to try to classify the difference class . grad b: i ' m , i did n't understand you what you said . what what model ? phd c: , the classifiers of the n to detect the different class to the different zones before try to recognize , with to transcribe , with a speech recognizer . and my idea is to use , a neural net phd c: with the information we obtain from this study of the parameter with the selected parameter to try to put the class of each frame . for the difference zone phd c: you , have obtained in the first , step with the , bic , criterion compare model and you i do n't - u professor d: because what we had before for , speaker change detection did not include these overlaps . so the first thing is for you to build up something that will detect the overlaps . so again , the first thing to do to detect the overlaps is to look at these , in professor d: , i again , the things you ' ve written up there are way too big . ok ? if you 're talking about , say , twelfth - order mfcc 's like that it 's just way too much . you wo n't be able to look at it . 
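the BIC criterion phd c brings up for change detection has a standard form; a minimal sketch for scoring a single candidate change point, with no search window or smoothing:

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """Delta-BIC for a change at frame t in feature block X (n x d),
    modeling each side with a full-covariance Gaussian. Positive values
    favor placing a change point; both halves need enough frames for a
    stable covariance estimate."""
    n, d = X.shape
    logdet = lambda Z: np.linalg.slogdet(np.cov(Z, rowvar=False))[1]
    penalty = 0.5 * lam * (d + d * (d + 1) / 2.0) * np.log(n)
    return (0.5 * n * logdet(X)
            - 0.5 * t * logdet(X[:t])
            - 0.5 * (n - t) * logdet(X[t:])
            - penalty)
```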
all you 'll be able to do is put it into a classifier and see how it does . whereas if you have things if you pick one or two dimensional things , or three of you have some very fancy display , and look at how the different classes separate themselves out , you 'll have much more insight about what 's going on . professor d: , you 'll get a feeling for what 's happening , so if you look at suppose you look at first and second - order cepstral coefficients for some one of these kinds of things and you find that the first - order is much more effective than the second , and then you look at the third and there 's not and not too much there , you may just take first and second - order cepstral coefficients , right ? and with lpc , lpc per se is n't gon na tell you much more than the other , maybe . , and on the other hand , the lpc residual , the energy in the lpc residual , will say how , the low - order lpc model 's fitting it , which should be pretty poorly for two or more people speaking at the same time , and it should be pretty , for w for one . and so i i again , if you take a few of these things that are prob promising features and look at them in pairs , you 'll have much more of a sense of " ok , i now have , doing a bunch of these analyses , i now have ten likely candidates . " and then you can do decision trees or whatever to see how they combine . phd c: but , i it is the first way to do that and i would like to , your opinion . all this study in the f in the first moment , i w i will pretend to do with equalizes speech . the the equalizes speech , the mixes of speech . phd c: , why ? because the spectral distortion is more a lot clearer , very much clearer if we compare with the pda . pda speech file is it will be difficult . i phd c: fff ! because the n the noise to sp the signal - to - noise relation is low . phd c: i i that the result of the study with this speech , the mix speech will work exactly with the pda files . grad b: it would be interesting in itself to see . , that would be an interesting result . phd c: what , what is the effect of the low ' signal to noise relation , with professor d: n u we , i think it 's not a it 's not unreasonable . it makes sense to start with the simpler signal because if you have features which do n't are n't even helpful in the high signal - to - noise ratio , then there 's no point in putting them into the low signal ratio , one would think , anyway . and so , if you can get @ again , my prescription would be that you would , with a mixed signal , you would take a collection of possible , features look at them , look at how these different classes that you ' ve marked , separate themselves , and then collect , in pairs , and then collect ten of them , and then proceed with a bigger classifier . and then if you can get that to work , then you go to the other signal . and then , and , they wo n't work as , but how m , how much and then you can re - optimize , and so on . grad b: . but it would be interesting to try a couple with both . because it would be interesting to see if some features work with close mixed , and and do n't professor d: that 's , the it it 's true that it also , it could be useful to do this exploratory analysis where you 're looking at scatter plots and so on in both cases . phd c: but what is the relation of the performance when you use the , speech file the pda speech files . , i . but it will be important . because people , different groups has experience with this problem . 
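a sketch of the residual-energy feature being recommended: fit a low-order all-pole (LPC) model to each frame and measure the fraction of energy the inverse filter fails to remove. the fit should be poor, and the ratio high, when two voices overlap. the file name and frame sizes are assumptions:

```python
import numpy as np
import librosa
import scipy.signal

y, sr = librosa.load("meeting_chunk.wav", sr=16000)
frame_len, hop = 400, 160  # 25 ms frames, 10 ms hop at 16 kHz

def residual_ratio(frame, order=12):
    a = librosa.lpc(frame, order=order)            # [1, a1, ..., a_p]
    resid = scipy.signal.lfilter(a, [1.0], frame)  # prediction error
    return np.sum(resid ** 2) / (np.sum(frame ** 2) + 1e-10)

ratios = np.array([residual_ratio(y[i:i + frame_len])
                   for i in range(0, len(y) - frame_len, hop)])
```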
is is not easy to solve , because if you i have seen the speech file from pda , and s some parts is very difficult because you do n't see the spectrum the spectrogram . phd c: is very difficult to apply , a parameter to detect change when you do n't see . professor d: , that 's another reason why very simple features , things like energy , and things like harmonicity , and residual energy are , are better to use than very complex ones because they 'll be more reliable . phd a: , i maybe this is a dumb question , but w it would be easier if you used a pda phd a: because ca n't you , could n't you like use beam - forming to detect speaker overlaps ? professor d: that , if we made use of the fact that there are two microphones , you do have some location information . which we do n't have with the one and so that 's professor d: , you wanna know whether you can do it with one , because it 's not necessarily true that every device that you 're trying to do this with will have two . professor d: , if , on the other hand , we show that there 's a huge advantage with two , then that could be a real point . but , we do n't n even know yet what the effect of detecting having the ability to detect overlaps is . , maybe it does n't matter too much . so , this is all pretty early stages . postdoc e: there there is a complication though , and that is if a person turns their back to the pda , then some of the positional information goes away ? postdoc e: and then , and if they 're on the access on the axis of it , that was the other thing i was thinking . he you mentioned this last time , that if you 're straight down the midline , then the r the left - right 's gon na be different , postdoc e: and in his case , he 's closer to it anyway . it seems to me that it 's not a p , it 's this the topograph the topology of it is a little bit complicated . phd c: i because the distance between the two microph , microphone , in the pda is very near . but it 's from my opinion , it 's an interesting idea to try to study the binaural problem , with information , because i found difference between the speech from each micro , in the pda . postdoc e: , i know i n i know that 's a very important cue . but i ' m just saying that the way we 're seated around a table , is not the same with respect to each person with respect to the pda , postdoc e: so we 're gon na have a lot of differences with ref respect to the speaker . professor d: that 's so so i @ the issue is , " is there a clean signal coming from only one direction ? " if it 's not coming from just one direction , if it if th if there 's a broader pattern , it means that it 's more likely there 's multiple people speaking , wherever they are . professor d: is it a is it , is there a narrow is there a narrow beam pattern or is it a distributed beam pattern ? so if there 's a distributed beam pattern , then it looks more like it 's , multiple people . wherever you are , even if he moves around . postdoc e: . ok , it just seemed to me that , that this is n't the ideal type of separation . , it 's see the value o professor d: , ideal would be to have the wall filled with them , but but just having two mikes if you looked at that thing on dan 's page , it was when when there were two people speaking , and it looked really different . phd a: did - , b i ' m not what dan 's page is that you mean . he was looking at the two professor d: you take the signal from the two microphones and you cros and you cross - correlate them with different lags . 
professor d: so when one person is speaking , then wherever they happen to be at the point when they 're speaking , then there 's a pretty big maximum right around that point in the l in the lag . professor d: so if at whatever angle you are , at some lag corresponding to the time difference between the two there , you get this boost in the cross - correlation value function . postdoc e: , let me ask you , if both people were over there , it would be less effective than if one was there and one was across , catty - corner ? phd a: if i was if i was here and morgan was there and we were both talking , it would n't work . postdoc e: next next one over n over on this side of the p pda . good example , the same one i ' m asking . postdoc e: versus you versus , and we 're catty - corner across the table , and i ' m farther away from this one and you 're farther away from that one . grad b: or or even if , like , if people were sitting right across from each other , you could n't tell the difference either . postdoc e: it seems like that would be pretty strong . across the same axis , you do n't have as much to differentiate . professor d: , we d , we do n't have a third dimension there . , so it 's postdoc e: and so my point was just that it 's gon na be differentially varia valuable . , it 's not to say , i certainly think it 's extremely val and we humans n depend on , these binaural cues . professor d: but it 's almost but it 's almost a what you 're talking about i there 's two things . professor d: there 's a sensitivity issue , and then there 's a pathological error issue . so th the one where someone is just right directly in line is a pathological error . professor d: if someone just happens to be sitting right there then we wo n't get good information from it . postdoc e: and i and if there so it and if it 's the two of you guys on the same side professor d: , if they 're close , it 's just a question of the sensitivity . so if the sensitivity is good enough and we just do n't have enough , experience with it to know how postdoc e: i ' m not trying to argue against using it , by any means . wanted to point out that weakness , that it 's topo topologically impossible to get it perfect for everybody . grad b: and dan is still working on it . so . he actually he wrote me about it a little bit , so . professor d: , the other thing you can do , if , i we 're assuming that it would be a big deal just to get somebody convince somebody to put two microphones in the pda . but if you h put a third in , you could put in the other axis . and then you 're , then you could cover professor d: @ but - but that 's , we can we 'll be all of this is there for us to study . professor d: but but , one of the at least one of the things i was hoping to get at with this is what can we do with what we think would be the normal situation if some people get together and one of them has a pda . professor d: right . , that 's the constraint of one question that both adam and i were interested in . but if you can instrument a room , this is really minor league compared with what some people are doing , right ? some people at , at brown and at cape , professor d: they both have these , big arrays on the wall . and , if you could do that , you ' ve got microphones all over the place grad b: , i saw one that was like a hundred microphones , a ten by ten array . phd a: and you could in a noisy room , they could have all kinds of noises and you can zoom right in on somebody . 
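a minimal time-domain sketch of the lag analysis being described; the actual demo presumably used something more refined (e.g. normalized or phase-transform weighting), so treat this as illustrative only:

```python
import numpy as np

def lag_spectrum(x1, x2, max_lag):
    """Cross-correlate two microphone channels over a range of lags. One
    talker gives a single sharp peak at the lag matching their position;
    two simultaneous talkers tend to spread energy across several lags."""
    lags = np.arange(-max_lag, max_lag + 1)
    xc = np.correlate(x1, x2, mode="full")
    mid = len(x2) - 1  # index of zero lag
    return lags, xc[mid - max_lag: mid + max_lag + 1]

# e.g. at 16 kHz, two PDA mics ~20 cm apart span roughly +/-10 samples:
# lags, xc = lag_spectrum(ch1_frame, ch2_frame, max_lag=12)
```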
grad b: it was all in software and they and you could pick out an individual beam and listen to it . professor d: but , the reason why i have n't focused on that as the fir my first concern is because , i ' m interested in what happens for people , random people out in some random place where they 're p having an impromptu discussion . and you ca n't just always go , " , let 's go to this heavily instrumented room that we spent tens of thousands of dollars to se to set up " . phd a: no , what you need to do is you 'd have a little fabric thing that you unroll and hang on a wall . it has all these mikes and it has a plug - in jack to the pda . professor d: the other thing actually , that gets at this a little bit of something else i 'd like to do , is what happens if you have two p d and they communicate with each other ? and then , they 're in random positions , the likelihood that , there would n't be any l likely to be any nulls , if you even had two . if you had three or four it 's . grad b: though all sorts of interesting things you can do with that , not only can you do microphone arrays , but you can do all sorts of multi - band as . so it 's it would be neat . phd a: i still like my rug on the wall idea , so if anybody patents that , then grad b: in terms of the research th research , it 's really it 's whatever the person who is doing the research wants to do . grad b: so if jose is interested in that , that 's great . but if he 's not , that 's great too . professor d: , i i i would actually like us to wind it down , see if we can still get to the end of the , birthdays thing there . grad b: , i had a couple things that i did wanna bring out . one is , do we need to sign new these again ? postdoc e: , it 's slightly different . so i would say it would be a good idea . professor d: , this morning we did n't sign anything cuz we said that if anybody had signed it already , we did n't have to . grad b: , i should ' ve checked with jane first , but the ch the form has changed . grad b: so we may wanna have everyone sign the new form . , i had some things i wanted to talk about with the thresholding i ' m doing . grad b: but , if we 're in a hurry , we can put that off . and then also anonymity , how we want to anonymize the data . postdoc e: , should i have some results to present , but i we wo n't have time to do that this time . but it seems like the anonymization is , is also something that we might wanna discuss in greater length . postdoc e: if if we 're about to wind down , what i would prefer is that we , delay the anonymization thing till next week , and i would like to present the results that i have on the overlaps . professor d: , why do n't we , so @ ok . @ it sounds like u , there were a couple technical things people would like to talk about . why do n't we just take a couple minutes to briefly do them , and then and then we postdoc e: i 'd , i 'd prefer to have more time for my results . e could i do that next week maybe ? postdoc e: ok , that 's what i ' m asking . and the anonymization , if y if you want to proceed with that now , think that 's a discussion which also n really deserves a lo a , more that just a minute . postdoc e: i really do think that , because you raised a couple of possibilities yourself , you and i have discussed it previously , and there are different ways that people approach it , e and we should grad b: alright . 
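for the array idea discussed above, a toy delay-and-sum beamformer; real arrays use fractional delays, calibration, and adaptive weights, so this is only the simplest possible version:

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Average equal-length microphone channels after shifting each by an
    integer sample delay that compensates its distance to the target
    talker; signals from that direction add coherently, others do not.
    np.roll wraps at the edges, acceptable only in a toy sketch."""
    out = np.zeros(len(channels[0]))
    for ch, d in zip(channels, delays):
        out += np.roll(ch, -d)
    return out / len(channels)
```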
we 're we 're just we 're getting enough data now that i 'd like to do it now , before i get overwhelmed with once we decide how to do it going and dealing with it . postdoc e: it 's just . ok . i 'll give you the short version , but i do think it 's an issue that we ca n't resolve in five minutes . ok , so the short thing is , we have , tape recording , digitized recor recordings . those we wo n't be able to change . if someone says " hey , roger so - and - so " . so that 's gon na stay that person 's name . now , in terms of like the transcript , the question becomes what symbol are you gon na put in there for everybody 's name , and whether you 're gon na put it in the text where he says " hey roger " or are we gon na put that person 's anonymized name in instead ? grad b: no , because then that would give you a mapping , and you do n't wanna have a mapping . postdoc e: ok , so first decision is , we 're gon na anonymize the same name for the speaker identifier and also in the text whenever the speaker 's name is mentioned . grad b: because that would give you a mapping between the speaker 's real name and the tag we 're using , and we do n't want postdoc e: i do n't think you understood what i said . so , so in within the context of an utterance , someone says " so , roger , what do you think ? " then , it seems to me that , maybe i it seems to me that if you change the name , the transcript 's gon na disagree with the audio , and you wo n't be able to use that . grad b: we do n't we wanna we ha we want the transcript to be " roger " . because if we made the transcript be the tag that we 're using for roger , someone who had the transcript and the audio would then have a mapping between the anonymized name and the real name , and we wanna avoid that . postdoc e: ok , but then there 's this issue of if we 're gon na use this for a discourse type of thing , then and , liz was mentioning in a previous meeting about gaze direction and who 's the addressee and all , then to have " roger " be the thing in the utterance and then actually have the speaker identifier who was " roger " be " frank " , that 's going to be really confusing and make it useless for discourse analysis . postdoc e: now , if you want to , , in some cases , i know that susan ervin - tripp in some of hers , actually did do , a filter of the s signal where the person 's name was mentioned , except postdoc e: and and i cer and i so , the question then becomes one level back . , how important is it for a person to be identified by first name versus full name ? , on the one hand , it 's not a full identity , we 're taking all these precautions , and they 'll be taking precautions , which are probably even the more important ones , to they 'll be reviewing the transcripts , to see if there 's something they do n't like ok . so , maybe that 's enough protection . on the other hand , this is a small pool , and people who say things about topic x e who are researchers and - known in the field , they 'll be identifiable and simply from the first name . however , taking one step further back , they 'd be identifiable anyway , even if we changed all the names . postdoc e: so , is it really , ? now , in terms of like so i did some results , which i 'll report on n next time , which do mention individual speakers by name . now , there , the human subjects committee is very precise . you do n't wanna mention subjects by name in published reports . 
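a tiny sketch of the option grad b is arguing for: give each participant a stable anonymized tag in the speaker field only, and leave names spoken inside utterances untouched, so the transcript still matches the audio and no name-to-tag mapping is exposed. the names and line format here are invented:

```python
speaker_tags = {}

def tag_for(name):
    # Assign each new speaker the next stable pseudonym.
    if name not in speaker_tags:
        speaker_tags[name] = "speaker-%d" % (len(speaker_tags) + 1)
    return speaker_tags[name]

line = "roger: so , frank , what do you think ?"
spk, utt = line.split(":", 1)
print(tag_for(spk) + ":" + utt)  # -> speaker-1: so , frank , ...
```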
now , it would be very possible for me to take those data put them in a study , and just change everybody 's name for the purpose of the publication . and someone who looked professor d: . , , t it does n't , i ' m not knowledgeable about this , but it certainly does n't bother me to have someone 's first name in the transcript . postdoc e: , and in the form that they sign , it does say " your first name may arise in the course of the meetings " . professor d: and so so again , th the issue is if you 're tracking discourse things , if someone says , " frank said this " and then you wanna connect it to something later , you ' ve got ta have this part where that 's " frank colon " . postdoc e: , and , even more i , immediate than that just being able to , it just seems like to track from one utterance to the next utterance who 's speaking and who 's speaking to whom , cuz that can be important . s i , " you raised the point , so - and - so " , it 's be to be able to know who " you " was . postdoc e: and ac and actually you remember furthermore , you remember last time we had this discussion of how , i was avoiding mentioning people 's names , postdoc e: and it was and we made the decision that was artificial . , if we 're going to step in after the fact and change people 's names in the transcript , we ' ve done something one step worse . grad b: , i would sug i do n't wanna change the names in the transcript , but that 's because i ' m focused so much on the acoustics instead of on the discourse , and so that 's a really good point . professor d: l let me just back up this to make a brief comment about the , what we 're covering in the meeting . i realize when you 're doing this that , i did n't realize that you had a bunch of things that you wanted to talk about . , and so i was proceeding some somewhat at random , frankly . so what would be helpful would be , i and i 'll mention this to liz and andreas too , that , before the meeting if anybody could send me , any , agenda items that they were interested in and i 'll take the role of organizing them , into the agenda , but i 'd be very pleased to have everyone else completely make up the agenda . i ' ve no desire to make it up , but if no one 's told me things , then i ' m just proceeding from my guesses , and i ye , i ' m it ended up with your out your time to , i ' m just always asking jose what he 's doing , and so it 's there 's , there 's other things going on . postdoc e: , it 's not a problem . not a problem . i just could n't do it in two minutes . grad b: how will we how would the person who 's doing the transcript even know who they 're talking about ? do what i ' m saying ? grad b: , so if i ' m saying in a meeting , " and bob , wanted to do so - and - so " , grad b: if you 're doing , @ they 're just gon na write " bob " . and so . if you 're if you 're doing discourse analysis , postdoc e: , i ' m betting we 're gon na have huge chunks that are just un untranscribable by them . professor d: , they 're gon na say speaker - one , or speaker - two or speaker i grad b: , the current one they do n't do speaker identity . because in naturallyspeaking , or , excuse me , in viavoice , it 's only one person . and so in their current conventions there are no multiple speaker conventions . postdoc e: . that my understanding from yen is it yen - ching ? is that how you pronounce her name ? postdoc e: was that , they will that they will adopt the part of the conventions that we discussed , where they put speaker identifier down . 
but , h they wo n't know these people , so it 's , they 'll adopt some convention but we have n't specified to them so they 'll do something like speaker - one , speaker - two , is what i bet , but i ' m betting there 'll be huge variations in the accuracy of their labeling the speakers . we 'll have to review the transcripts in any case . professor d: and it and it may very be , since they 're not going to sit there and worry ab about , it being the same speaker , they may very go the first se the first time it changes to another speaker , that 'll be speaker - two . and the next time it 'll be speaker - three even if it 's actually speaker - one . postdoc e: and that 's ok . yes , i was thinking , the temp the time values of when it changes . grad b: the p it 's a good point , " which what do you do for discourse tracking ? " phd c: because y you to know , you do n't need to i what is the iden identification of the speakers . you only want to know professor d: if if someone says , " what is jose doing ? " and then jose says something , you need to know that was jose responding . postdoc e: unless we adopt a different set of norms which is to not i d to make a point of not identifying people by name , which then leads you to be more contextually ex explicit . postdoc e: , people are very flexible . ? , so when we did this las last week , i felt that , now , andreas may , @ , he i sometimes people think of something else at the same time and they miss a sentence , and because he missed something , then he missed the r the initial introduction of who we were talking about , and was unable to do the tracking . but i felt like most of us were doing the tracking and knew who we were talking about and we just were n't mentioning the name . so , people are really flexible . phd a: but , like , at the beginning of this meeting or , you said , or s liz , said something about , " is mari gon na use the equipment ? " , how would you say that ? phd a: it would be really hard if we made a policy where we did n't say names , plus we 'd have to tell everybody else . grad b: , darn ! , what i was gon na say is that the other option is that we could bleep out the names . phd a: the i , my own two cents worth is that you do n't do anything about what 's in the recordings , you only anonymize to the extent you can , the speakers have signed the forms and all . grad b: , but that as i said , that works great for the acoustics , but it hurts you a lot for trying to do discourse . grad b: because you do n't have a map of who 's talking versus their name that they 're being referred to . professor d: ok , so suppose someone says , " i if i really heard what , what jose said . " and then , jose responds . and part of your learning about the dialogue is jose responding to it . but it does n't say " jose " , it says " speaker - five " . so u phd a: , i see , you wanna associated the word " jose " in the dialogue with the fact that then he responded . professor d: and so , if we pass out the data to someone else , and it says " speaker - five " there , we also have to pass them this little guide that says that speaker - five is jose , professor d: and if were gon na do that we might as give them " jose " say it was " jose " . postdoc e: now , that we have these two phases in the data , which is the one which is o our use , university of washington 's use , ibm , sri . 
and within that , it may be that it 's sufficient to not change the to not incorporate anonymization yet , but always , always in the publications we have to . and also , when we take it that next step and distribute it to the world , we have to . but i but i don that 's a long way from now and it 's a matter of between now and then of d of deciding how postdoc e: i it , it may be s that we 'll need to do something like actually x out that part of the audio , and just put in brackets " speaker - one " . grad b: , what we could do also is have more than one version of release . one that 's public and one that requires licensing . and so the licensed one would w we could it would be a sticky limitation . , like , we can talk about that later . postdoc e: that 's risky . that the public should be the same . that when we do that world release , it should be the same . professor d: that we have a need to have a consistent licensing policy of some sort , and phd a: , one thing to take into consideration is w are there any , the people who are funding this work , they want this work to get out and be useful for discourse . if we all of a sudden do this and then release it to the public and it 's not longer useful for discourse , grad b: , depending on how much editing we do , you might be able to still have it useful . because for discourse you do n't need the audio . so you could bleep out the names in the audio . and use the anonymized one through the transcript . professor d: she no , but she 's saying , from the argument before , she wants to be able to say if someone said " jose " in their thing , and then connect to so to what he said later , then you need it . grad b: but in the transcript , you could say , everywhere they said " jose " that you could replace it with " speaker - seven " . grad b: and then it would n't meet match the audio anymore . but it would be still useful for the professor d: and th and the other thing is if liz were here , what she might say is that she wants to look if things that cut across between the audio and the dialogue , postdoc e: - . we have to think about w @ how . that this ca n't be decided today . postdoc e: but it 's g but it was good to introduce the thing and we can do it next time . grad b: i did n't think when i wrote you that email i was n't thinking it was a big can of worms , but i it is . postdoc e: it discourse , also i wanted to make the point that discourse is gon na be more than just looking at a transcript . postdoc e: it 's gon na be looking at a t , and prosod prosodic is involved , and that means you 're going to be listening to the audio , and then you come directly into this confronting this problem . phd a: maybe we should just not allow anybody to do research on discourse , and then , we would n't have to worry about it . professor d: , maybe we should only have meetings between people who one another and who are also amnesiacs who their own name . postdoc e: we could have little labels . i wanna introduce my reservoir dogs solution again , which is everyone has like " mister white " , " mister pink " , " mister blue " . grad b: did you read the paper a few years ago where they were reversing the syllables ? they were di they had the utterances . and they would extract out the syllables and they would play them backwards . phd a: but so , the syllables were in the same order , with respect to each other , but the acous grad b: everything was in the same order , but they were the individual syll syllables were played backwards . 
and you could listen to it , and it would sound the same . grad b: people had no difficulty in interpreting it . so what we need is something that 's the reverse , that a speech recognizer works exactly the same on it but people ca n't understand it . professor d: , that 's there 's an easy way to do that . jus - jus just play it all backwards . phd a: it would be fun sometime to read them with different intonations . like as if you were talking like , " nine eight six eight seven ? " postdoc e: , in the one i transcribed , i did find a couple instances i found one instance of contrastive stress , where it was like the string had a li so it was like " nine eight two four , nine two four " . postdoc e: , they differed . , at that session i did feel like they did it more as sentences . but , sometimes people do it as phone numbers . , i ' ve i am interested in and sometimes , i s and i never know . when i do it , i ask myself what i ' m doing each time . phd a: , . , i was thinking that it must get boring for the people who are gon na have to transcribe this postdoc e: i like your question intonation . that 's very funny . i have n't heard that one . grad b: we have the transcript . we have the actual numbers they 're reading , so we 're not necessarily depending on that . ok , i ' m gon na go off .
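two of the audio manipulations mentioned above are simple to approximate. first, the "bleep out the names" option: replace a named time span with a tone, with the span assumed to come from a word-level alignment and the tone parameters arbitrary:

```python
import numpy as np

def bleep(y, sr, start_sec, end_sec, freq=440.0):
    lo, hi = int(start_sec * sr), int(end_sec * sr)
    t = np.arange(hi - lo) / sr
    y = y.copy()
    y[lo:hi] = 0.1 * np.sin(2 * np.pi * freq * t)  # tone over the name
    return y
```

second, the locally time-reversed speech experiment: play each short chunk backwards while keeping chunk order. this sketch uses fixed-length chunks; the study described presumably segmented at actual syllable boundaries:

```python
import numpy as np

def reverse_chunks(y, sr, chunk_ms=50):
    n = int(sr * chunk_ms / 1000)
    chunks = [y[i:i + n][::-1] for i in range(0, len(y), n)]
    return np.concatenate(chunks)
```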
the berkeley meeting recorder group discussed research aims and corresponding concerns for future data collection. it was agreed that a substantial amount of meeting data is required from different domains , and comprising several speakers , to perform the types of discourse and acoustic analyses desired. ongoing efforts by speaker mn005 to automatically detect regions of speaker overlap were considered. it was suggested that speaker mn005 focus on a small set of acoustic parameters , e.g . energy and harmonics-related features , to distinguish regions of overlap from those containing the speech of just one speaker. disk space issues were discussed. and , finally , the problem of speaker anonymization was explored. recordings must be of existing meetings that are conducted in english. participants should ideally consist of professors and doctoral students , but no undergraduate students , who are willing to record their meetings at icsi. the meeting recorder corpus should comprise data from a large number of speakers representing different domains. attempts should also be made to optimize the speaker population for generating good language models. speaker me011 will pursue volunteers from the haas business school to record their weekly meetings at icsi. a tentative decision was made to offer participants a recorded version of their meeting on a cd rom once the transcript screening phase is complete for that meeting. non-native speakers with a low proficiency in english are problematic for language modelling. the prospect of creating another recording setup requires the elimination of certain more complicated dimensions of the existing setup , e.g . the use of close-talking and far-field microphones. speaker anonymization poses problems for the transcription process , and also discourse analysis , as it makes it more difficult to track who is speaking and to whom a particular utterance is being addressed. as the current version of transcriptions does not include speaker identification labels , no multiple speaker transcription conventions are in use. research aims and corresponding concerns for future data collection were discussed. a student researcher will be working with speaker fe016 to investigate different strategies for automatically summarizing meetings , and identifying discussional 'hotspots'. efforts by speaker mn005 are ongoing to detect regions of speaker overlap in the signal. a total of 230 regions of overlapping speech have been manually transcribed for a subset of meeting data. supervised clustering and neural networks are being considered as means for classifying overlap. it was suggested that speaker mn005 focus on a small set of acoustic parameters , e.g . energy and harmonics-related features , and use the mixed signal to distinguish regions of overlap from those containing the speech of just one speaker. future work may also involve focusing on additional signals , and using a markov model to analyze acoustic parameters over larger time frames. beam-forming was suggested as an alternate method of detecting overlapping speech. efforts are ongoing to select an optimal method for anonymizing speakers. more disk space is gradually being made available for the storage of new meeting recorder data.
phd f: i 'd love to get people that are not linguists or engineers , cuz these are both weird professor d: , the they make funny sounds . the o the other the other thing is , that we talked about is give to them , burn an extra cd - rom . professor d: and give them so if they want a { nonvocalsound } and audio record of their phd f: , that was he meant , " give them a music cd , " like they g then he said a cd of the of their speech and i it depends of what audience you 're talking to , but , i personally { nonvocalsound } would not want a { nonvocalsound } cd of my meeting , grad b: it 'd be fun . it would just be fun , if nothing else , . it 's a novelty item . professor d: but it als it it also builds up towards the goal . we 're saying , " look , you 're gon na get this . is - is is n't that neat . then you 're gon na go home with it . it 's actually p it 's probably gon na be pretty useless to you , but you 'll ge appreciate , where it 's useful and where it 's useless , and then , we 're gon na move this technology , so it 'll become useful . " so . phd a: what if you could tell them that you 'll give them the transcripts when they come back ? grad b: , . , anyone can have the transcripts . so . we could point that out . postdoc e: i hav i have to raise a little eensy - weensy concern about doing th giving them the cd immediately , because of these issues of , this , where maybe ? grad b: that 's a good point . right , it ca n't be the internal one . postdoc e: , that 's right , say " , i got this cd , and , your honor , i " professor d: so , let 's see . so that was that topic , and then , i another topic would be where are we in the whole disk resources question for grad b: we are slowly getting to the point where we have enough sp room to record meetings . so i did a bunch of archiving , and still doing a bunch of archiving , i ' m in the midst of doing the p - files from , broadcast news . and it took eleven hours to do to copy it . grad b: , it 's abbott . it 's abbott , so it just but it 's a lot of data . professor d: sk - it 's copying from one place on abbott to another place on abbott ? grad b: so i ' m archiving it , and then i ' m gon na delete the files . so that will give us ten gigabytes of free space . grad b: and and and so one that that will be done , like , in about two hours . and so , at that point we 'll be able to record five more meetings . postdoc e: one thing the good news about that is that once it 's archived , it 's pretty quick to get back . grad b: , especially because i ' m generating a clone , also . so . and that takes a while . postdoc e: now , what will is the plan to g to so will be saved , it 's just that you 're relocating it ? , so we 're gon na get more disk space ? or did i ? grad b: no , the these are the p - files from broadcast news , which are regeneratable grad b: , if we really need to , but we had a lot of them . and for the full , hundred forty hour sets . and so they were two gigabytes per file and we had six of them . professor d: w w we are getting more space . we are getting , another disk rack and four thirty - six gigabyte disks . so but that 's not gon na happen instantaneously . grad b: the sun , ha , takes more disks than the andatico one did . the sun rack takes th - one took four and one took six , or maybe it was eight and twelve . whatever it was , fifty percent more . 
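For orientation, the space arithmetic quoted above is roughly self-consistent, on the assumption that the quoted figures are per-file and per-meeting sizes: six p-files at about two gigabytes each is twelve gigabytes, in line with the "ten gigabytes of free space" after overhead, and ten gigabytes spread over five additional meetings implies on the order of two gigabytes of audio per recorded meeting.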
grad b: , what happened is that we bought all our racks and disks from andatico for years , according to dave , and andatico got bought by another company and doubled their prices . and so , we 're looking into other vendors . we by " we " dave . phd a: - . i ' ve been looking at the , aurora data and , first look at it , there were three directories on there that could be moved . one was called aurora , one was spanish , which was carmen 's spanish , and the other one was , spine . phd a: and so , i wrote to dan and he was very concerned that the spine was moving to a non - backed - up disk . so , i realized that , probably not all of that should be moved , just the cd - rom type data , the static data . so i moved that , and then , i asked him to check out and see if it was ok . before i actually deleted the old , but i have n't heard back yet . i told him he could delete it if he wanted to , i have n't checked today to see if he 's deleted it or not . and then carmen 's , i realized that when i had copied all of her to xa , i had copied there that was dynamic data . and so , i had to redo that one and just copy over the static data . and so i need to get with her now and delete the old off the disk . and then i lo have n't done any of the aurora . i have to meet with , stephane to do that . so . professor d: so , but , y you 're figuring you can record another five meetings with the space that you 're clearing up from the broadcast news , but , we have some other disks , some of which you 're using for aurora , but are we g do we have some other space now ? grad b: so , so , we have space on the current disk right now , where meeting recorder is , and that 's probably enough for about four meetings . grad b: no , no , it 's wherever the meeting recorder currently is . it 's di . phd a: ok , i but the i ' m moving from aurora is on the dc disk that we grad b: i do n't remember . th - it 's dc - it 's whatever that one is . grad b: do n't remember , it might be dc . and that has enough for about four more meetings right now . , we were at a hundred percent and then we dropped down to eighty - six for reasons i do n't understand . , someone deleted something somewhere . and so we have some room again . and then with broadcast news , that 's five or six more meetings , so , we have a couple weeks . , so , i think we 're ok , until we get the new disk . phd a: so should , one question i had for you was , we need we sh probably should move the aurora an and all that other off of the meeting recorder disk . is there another backed - up disk that of that would ? grad b: we should put it onto the broadcast news one . that 's probably the best thing to do . and that way we consolidate meeting recorder onto one disk rather than spreading them out . grad b: but , so we could ' jus just do that at the end of today , once the archive is complete , and i ' ve verified it . cuz that 'll give us plenty of disk . professor d: , ok , @ so , then i th the last thing i 'd had on my agenda was just to hear an update on what jose has been doing , phd c: i have , the result of my work during the last days . for your information because i read . , and the last , days , i work , in my house , in a lot of ways and thinking , reading , different things about the meeting recording project . and i have , some ideas . , this information is very useful . because you have the distribution , now . phd c: but for me , is interesting because , here 's i is the demonstration of the overlap , problem . 
phd c: it 's a real problem , a frequently problem , because you have overlapping zones , all the time . phd c: , by a moment i have , nnn , the , n i did a mark of all the overlapped zones in the meeting recording , with , a exact mark . phd c: heh ? that 's , yet b , by b by hand because , " why . " phd c: my my idea is to work i do i do n't @ i , if , it will be possible because i have n't a lot , enough time to work . , only just , six months , as , but , my idea is , is very interesting to work in the line of , automatic segmenter . but , in my opinion , we need , a reference session to t to evaluate the tool . grad b: yes , . and so are you planning to do that or have you done that already ? phd c: i plan , but , the idea is the following . now , i need ehm , to detect all the overlapping zones exactly . i will , talk about , in the blackboard about the my ideas . phd c: , this information , with , exactly time marks , for the overlapping zones overlapping zone , and , a speaker a pure speech , speaker zone . , zones of speech of , one speaker without any , noise , any acoustic event that , w , is not , speech , real speech . and , i need t true , silence for that , because my idea is to study the nnn the set of parameters , what , are more m more discriminant to , classify . the overlapping zones in cooperation with the speech zones . the idea is to use , i ' m not to yet , but my idea is to use a cluster algorithm or , nnn , a person strong in neural net algorithm to study what is the , the property of the different feat feature , to classify speech and overlapping speech . phd c: and my idea is , it would be interesting to have , a control set . and my control set , will be the , silence without , any noise . postdoc e: . that 's interesting . this is like a ground level , with it 's not it 's not total silence . phd c: , noise , claps , tape clips , the difference , event , which , has , a hard effect of distorti spectral distortion in the speech . phd c: , i have mark in that not in all the file , only , nnn , mmm , i have , ehm i do n't remind what is the quantity , but , i have marked enough speech on over and all the overlapping zones . i have , two hundred and thirty , more or less , overlapping zones , and is similar to this information , phd c: because with the program , i cross the information of , of jane with , my segmentation by hand . and is , mor more similar . phd c: and the idea is , i will use , i want my idea is , to { nonvocalsound } to classify . phd c: i need , the exact , mark of the different , zones because i want to put , for , each frame a label indicating . it 's a sup supervised and , hierarchical clustering process . i put , for each frame { nonvocalsound } a label indicating what is th the type , what is the class , which it belong . , the class you will { nonvocalsound } overlapping speech " overlapping " is a class , " speech " { nonvocalsound } @ the class that 's phd c: because , my idea is , in the first session , i need , to be that the information , that , i will cluster , is right . because , if not , i will , return to the speech file to analyze , what is the problems , phd c: . and i 'd prefer i would prefer , the to have , this labeled automatically , but , fro th i need truth . postdoc e: i ' ve got ta ask you . so , the difference between the top two , i so so i start at the bottom , so " silence " is clear . by " speech " do you mean speech by one sp by one person only ? 
postdoc e: so this is un ok , and then the top includes people speaking at the same time , or a speaker and a breath overlapping , someone else 's breath , or clicking , overlapping with speech so , that 's all those possibilities in the top one . phd c: one , two , three . but no , by th by the moment n . , in the first moment , because , i have information , of the overlapping zones , information about if the , overlapping zone is , from a speech , clear speech , from a one to a two speaker , or three speaker , or is the zone where the breath of a speaker , overlaps , onto , a speech , another , especially speech . postdoc e: so it 's basi it 's speech wi som with something overlapping , which could be speech but does n't need to be . professor d: no , but there 's but , she 's saying " where do you in these three categories , where do you put the instances in which there is one person speaking and other sounds which are not speech ? " phd c: , he here i put speech from , one speaker without , any events more . professor d: right , so where do you put speech from one speaker that does have a nonspeech event at the same time ? phd c: for for the by the @ no , @ because i want to limit the nnn , the study . grad b: , so that 's what he was saying before , is that he excluded those . phd c: , be why ? what 's the reason ? because i it 's the first study . the first professor d: , no , it 's a perfectly sensible way to go . we just wondered trying to understand what you were doing . postdoc e: we 're just cuz you ' ve talked about other overlapping events in the past . so , this is a subset . phd c: but , the first idea because , i what hap what will happen with the study . phd c: it 's the control set . ok ? it 's pure si pure silence with the machine on the roof . professor d: what you w what you m what you mean is that it 's nonspeech segments that do n't have impulsive noises . professor d: right ? cuz you 're calling what you 're calling " event " is somebody coughing or clicking , or rustling paper , or hitting something , which are impulsive noises . but steady - state noises are part of the background . which , are being , included in that . phd c: h here yet , yet i think , there are that some noises that , do n't wanted to be in that , in that control set . phd c: but i prefer , i prefer at the first , the silence with , this the of noise . professor d: right , it 's " background " might be a better word than " silence " . it 's just that the background acoustic phd c: is is only and , with this information the idea is , nnn , i have a label for each , frame and , with a cluster algorithm i and phd c: and i am going to prepare a test bed , , a set of feature structure , models . and my idea is phd c: so on because i have a pitch extractor yet . i have to test , but i phd c: i ha i have prepare . is a modified version of a pitch tracker , from , standar - stanford university in stanford ? no . from , , cambridge university . phd c: , i do n't remember what is the name of the author , because i have several i have , , library tools , from , festival and of from edinburgh , from cambridge , and from our department . phd c: and i have to because , in general the pitch tracker , does n't work very and grad b: bad . but , as a feature , it might be ok . so , we .
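A minimal sketch of the per-frame labelling being described, assuming hand-marked (start, end, class) time segments and a 10 ms analysis hop; the segment format, constants, and function names here are invented for illustration and are not the project's actual tools:

```python
import numpy as np

# Class constants and the (start_sec, end_sec, class) segment format are
# hypothetical stand-ins for the hand-made marks described above.
BACKGROUND, SPEECH, OVERLAP, UNLABELED = 1, 2, 3, 0

segments = [
    (0.00, 1.20, BACKGROUND),  # "control set": background without impulsive noise
    (1.20, 3.85, SPEECH),      # exactly one speaker, no other acoustic events
    (3.85, 4.60, OVERLAP),     # two or more speakers (or speech plus breath)
]

def frame_labels(segments, total_sec, hop_sec=0.010):
    """One class label per analysis frame, assuming a 10 ms hop."""
    n_frames = int(np.ceil(total_sec / hop_sec))
    labels = np.full(n_frames, UNLABELED, dtype=np.int8)
    for start, end, cls in segments:
        labels[int(round(start / hop_sec)):int(round(end / hop_sec))] = cls
    return labels

# Frames outside any marked zone stay UNLABELED and are simply excluded,
# which mirrors the decision above to leave out speech-plus-noise frames.
labels = frame_labels(segments, total_sec=5.0)
```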
phd c: this this is and th the idea is to , to obtain , , diff , different , no , a great number of fec , , twenty - five , thirty parameters , for each one . and in a first , nnn , step in the investi in the research in , my idea is try to , to prove , what is the performance of the difference parameter , to classify the different , what is the front - end approach to classify , the different , frames of each class and what is the , nnn , what is the , the error , of the data phd c: this is the , first idea and the second is try to , to use some ideas , similar to the linear discriminant analysis . , similar , because the idea is to study what is the contribution of , each parameter to the process of classify correctly the different parameters . phd c: , the classifier is nnn by the moment is , similar , nnn , that the classifier used , in a quantifier vectorial quantifier is , used to , some distance to put , a vector , in a class different . phd c: is ? w with a model , is only to cluster using a , @ or a similarity . phd c: a another possibility it to use a netw a neural network . but what 's the p what is my idea ? what 's the problem i see in if you use the neural network ? if w when this , mmm , cluster , clustering algorithm to can test , to can observe what happened you ca n't , put up with your hand in the different parameter , phd c: but if you use a neural net is a good idea , but you what happened in the interior of the neural net . professor d: , actually , you can do sensitivity analyses which show you what the importance of the different parce pieces of the input are . it 's hard to w what you it 's hard to tell on a neural net is what 's going on internally . but it 's actually not that hard to analyse it and figure out the effects of different inputs , especially if they 're all normalized . , but professor d: , then a decision tree is really good , but here he 's not like he has one , a bunch of very distinct variables , like pitch and this he 's talking about , like , a all these cepstral coefficients , and , in which case a any reasonable classifier is gon na be a mess , and it 's gon na be hard to figure out what professor d: , the other thing that one , this is , a good thing to do , to look at these things at least see what i 'd let me tell you what i would do . i would take just a few features . instead of taking all the mfcc 's , or all the plp 's or whatever , i would just take a couple . ok ? like like c - one , c - two , something like that , so that you can visualize it . and look at these different examples and look at scatter plots . ok , so before you do build up any fancy classifiers , just take a look in two dimensions , at how these things are split apart . that will give you a lot of insight of what is likely to be a useful feature when you put it into a more complicated classifier . and the second thing is , once you actually get to the point of building these classifiers , @ what this lacks so far is the temporal properties . so if you 're just looking at a frame and a time , you anything about , the structure of it over time , and so you may wanna build @ build a markov model of some sort , or else have features that really are based on some bigger chunk of time . professor d: but this is a good place to start . 
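What the scatter-plot exploration suggested here might look like, as a sketch: it assumes the per-frame cepstral features and the frame labels from the previous sketch already exist, and that the first two columns hold C1 and C2. All names are hypothetical.

```python
import matplotlib.pyplot as plt

CLASS_NAMES = {1: "background", 2: "speech", 3: "overlap"}  # as in the sketch above

def scatter_two_cepstra(feats, labels):
    """Scatter plot of two cepstral dimensions, one color per hand-marked class.

    feats: (n_frames, n_coeffs) array; columns 0 and 1 are assumed to hold
    C1 and C2 (with C0/energy kept elsewhere). labels: per-frame classes.
    """
    for cls, name in CLASS_NAMES.items():
        sel = labels == cls
        plt.scatter(feats[sel, 0], feats[sel, 1], s=2, alpha=0.3, label=name)
    plt.xlabel("C1")
    plt.ylabel("C2")
    plt.legend()
    plt.title("Class separation in two cepstral dimensions")
    plt.show()
```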
but do n't anyway , this is my suggestion , is do n't just , throw in twenty features at it , the deltas , and the delta del and all that into some classifier , even if it 's k - nearest - neighbors , you still wo n't know what it 's doing , even it 's , to it 's to have a better feeling for what it 's look at som some picture that shows you , " here 's these things , are offer some separation . " and , in lpc , the thing to particularly look at is , is something like , the residual postdoc e: can i ask ? it strikes me that there 's another piece of information , that might be useful and that 's simply the transition . so , w if you go from a transition of silence to overlap versus a transition from silence to speech , there 's gon na be a b a big informative area there , it seems to me . phd c: , because . i . but i is my own vision , of the project . phd c: i the meeting recorder project , for me , has , two , w has several parts , several p objective , because it 's a great project . but , at the first , in the acoustic , parts of the project , you we have two main objective . one one of these is to detect the change , the acoustic change . and for that , if you do n't use , , a speech recognizer , broad class , or not broad class to try to label the different frames , the ike criterion or bic criterion will be enough to detect the change . and probably . i would like to t prove . , probably . when you have , s the transition of speech or silence to overlap zone , this criterion is enough with probably with , this , the more use used normal , regular parameter mf - mfcc . you have to find you can find the mark . you can find the nnn , the acoustic change . but i understand that you your objective is to classify , to know that zone not is only a new zone in the file , that you have , but you have to know that this is overlap zone . because in the future you will try to process that zone with a non - regular speech recognizer model , i suppose . you will pretend to process the overlapping z zone with another algorithm because it 's very difficult to obtain the transcription from using a regular , normal speech recognizer . that , i is the idea . and so the , nnn the { nonvocalsound } the system will have two models . phd c: a model to detect more acc the mor most accurately possible that is p , will be possible the , the mark , the change and another model will @ or several models , to try s but several model robust models , sample models to try to classify the difference class . grad b: i ' m , i did n't understand you what you said . what what model ? phd c: , the classifiers of the n to detect the different class to the different zones before try to recognize , with to transcribe , with a speech recognizer . and my idea is to use , a neural net phd c: with the information we obtain from this study of the parameter with the selected parameter to try to put the class of each frame . for the difference zone phd c: you , have obtained in the first , step with the , bic , criterion compare model and you i do n't - u professor d: because what we had before for , speaker change detection did not include these overlaps . so the first thing is for you to build up something that will detect the overlaps . so again , the first thing to do to detect the overlaps is to look at these , in professor d: , i again , the things you ' ve written up there are way too big . ok ? if you 're talking about , say , twelfth - order mfcc 's like that it 's just way too much . you wo n't be able to look at it . 
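For reference, the BIC-based change detection mentioned above is conventionally posed as a model-selection test around a candidate change point: fit one full-covariance Gaussian to the whole analysis window and one to each half, and hypothesize a change where the penalized likelihood gain is positive. This is the standard Chen & Gopalakrishnan formulation, supplied here for context rather than taken from the meeting:

$$
\Delta\mathrm{BIC}(t) \;=\; \frac{N}{2}\log|\Sigma| \;-\; \frac{N_1}{2}\log|\Sigma_1| \;-\; \frac{N_2}{2}\log|\Sigma_2| \;-\; \frac{\lambda}{2}\left(d + \frac{d(d+1)}{2}\right)\log N
$$

where the window of $N = N_1 + N_2$ frames of $d$-dimensional features is split at $t$; $\Sigma$, $\Sigma_1$, $\Sigma_2$ are the sample covariances of the whole window and its two halves; and $\lambda \approx 1$ weights the penalty. Points with $\Delta\mathrm{BIC}(t) > 0$ are candidate acoustic changes.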
all you 'll be able to do is put it into a classifier and see how it does . whereas if you have things if you pick one or two dimensional things , or three of you have some very fancy display , and look at how the different classes separate themselves out , you 'll have much more insight about what 's going on . professor d: , you 'll get a feeling for what 's happening , so if you look at suppose you look at first and second - order cepstral coefficients for some one of these kinds of things and you find that the first - order is much more effective than the second , and then you look at the third and there 's not and not too much there , you may just take first and second - order cepstral coefficients , right ? and with lpc , lpc per se is n't gon na tell you much more than the other , maybe . , and on the other hand , the lpc residual , the energy in the lpc residual , will say how , the low - order lpc model 's fitting it , which should be pretty poorly for two or more people speaking at the same time , and it should be pretty , for w for one . and so i i again , if you take a few of these things that are prob promising features and look at them in pairs , you 'll have much more of a sense of " ok , i now have , doing a bunch of these analyses , i now have ten likely candidates . " and then you can do decision trees or whatever to see how they combine . phd c: but , i it is the first way to do that and i would like to , your opinion . all this study in the f in the first moment , i w i will pretend to do with equalizes speech . the the equalizes speech , the mixes of speech . phd c: , why ? because the spectral distortion is more a lot clearer , very much clearer if we compare with the pda . pda speech file is it will be difficult . i phd c: fff ! because the n the noise to sp the signal - to - noise relation is low . phd c: i i that the result of the study with this speech , the mix speech will work exactly with the pda files . grad b: it would be interesting in itself to see . , that would be an interesting result . phd c: what , what is the effect of the low ' signal to noise relation , with professor d: n u we , i think it 's not a it 's not unreasonable . it makes sense to start with the simpler signal because if you have features which do n't are n't even helpful in the high signal - to - noise ratio , then there 's no point in putting them into the low signal ratio , one would think , anyway . and so , if you can get @ again , my prescription would be that you would , with a mixed signal , you would take a collection of possible , features look at them , look at how these different classes that you ' ve marked , separate themselves , and then collect , in pairs , and then collect ten of them , and then proceed with a bigger classifier . and then if you can get that to work , then you go to the other signal . and then , and , they wo n't work as , but how m , how much and then you can re - optimize , and so on . grad b: . but it would be interesting to try a couple with both . because it would be interesting to see if some features work with close mixed , and and do n't professor d: that 's , the it it 's true that it also , it could be useful to do this exploratory analysis where you 're looking at scatter plots and so on in both cases . phd c: but what is the relation of the performance when you use the , speech file the pda speech files . , i . but it will be important . because people , different groups has experience with this problem . 
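A sketch of the LPC-residual cue, using the standard autocorrelation (Yule-Walker) formulation rather than anything specified in the meeting. The intuition matches the remark above: a low-order all-pole model fits a single voice fairly well, so the normalized prediction-error energy tends to be higher for overlapped voices.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_residual_energy(frame, order=8):
    """Normalized LPC residual energy of one windowed frame (autocorrelation method)."""
    frame = frame * np.hamming(len(frame))
    # Autocorrelation sequence r[0..order]
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    if r[0] <= 0:
        return 0.0  # silent frame: no energy to model
    # Solve the Yule-Walker normal equations (Toeplitz system) for the predictor
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    # Prediction-error (residual) energy, normalized by total frame energy;
    # expected to be relatively larger when two or more voices are mixed.
    err = r[0] - np.dot(a, r[1:])
    return float(err / r[0])
```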
is is not easy to solve , because if you i have seen the speech file from pda , and s some parts is very difficult because you do n't see the spectrum the spectrogram . phd c: is very difficult to apply , a parameter to detect change when you do n't see . professor d: , that 's another reason why very simple features , things like energy , and things like harmonicity , and residual energy are , are better to use than very complex ones because they 'll be more reliable . phd a: , i maybe this is a dumb question , but w it would be easier if you used a pda phd a: because ca n't you , could n't you like use beam - forming to detect speaker overlaps ? professor d: that , if we made use of the fact that there are two microphones , you do have some location information . which we do n't have with the one and so that 's professor d: , you wanna know whether you can do it with one , because it 's not necessarily true that every device that you 're trying to do this with will have two . professor d: , if , on the other hand , we show that there 's a huge advantage with two , then that could be a real point . but , we do n't n even know yet what the effect of detecting having the ability to detect overlaps is . , maybe it does n't matter too much . so , this is all pretty early stages . postdoc e: there there is a complication though , and that is if a person turns their back to the pda , then some of the positional information goes away ? postdoc e: and then , and if they 're on the access on the axis of it , that was the other thing i was thinking . he you mentioned this last time , that if you 're straight down the midline , then the r the left - right 's gon na be different , postdoc e: and in his case , he 's closer to it anyway . it seems to me that it 's not a p , it 's this the topograph the topology of it is a little bit complicated . phd c: i because the distance between the two microph , microphone , in the pda is very near . but it 's from my opinion , it 's an interesting idea to try to study the binaural problem , with information , because i found difference between the speech from each micro , in the pda . postdoc e: , i know i n i know that 's a very important cue . but i ' m just saying that the way we 're seated around a table , is not the same with respect to each person with respect to the pda , postdoc e: so we 're gon na have a lot of differences with ref respect to the speaker . professor d: that 's so so i @ the issue is , " is there a clean signal coming from only one direction ? " if it 's not coming from just one direction , if it if th if there 's a broader pattern , it means that it 's more likely there 's multiple people speaking , wherever they are . professor d: is it a is it , is there a narrow is there a narrow beam pattern or is it a distributed beam pattern ? so if there 's a distributed beam pattern , then it looks more like it 's , multiple people . wherever you are , even if he moves around . postdoc e: . ok , it just seemed to me that , that this is n't the ideal type of separation . , it 's see the value o professor d: , ideal would be to have the wall filled with them , but but just having two mikes if you looked at that thing on dan 's page , it was when when there were two people speaking , and it looked really different . phd a: did - , b i ' m not what dan 's page is that you mean . he was looking at the two professor d: you take the signal from the two microphones and you cros and you cross - correlate them with different lags . 
professor d: so when one person is speaking , then wherever they happen to be at the point when they 're speaking , then there 's a pretty big maximum right around that point in the l in the lag . professor d: so if at whatever angle you are , at some lag corresponding to the time difference between the two there , you get this boost in the cross - correlation value function . postdoc e: , let me ask you , if both people were over there , it would be less effective than if one was there and one was across , catty - corner ? phd a: if i was if i was here and morgan was there and we were both talking , it would n't work . postdoc e: next next one over n over on this side of the p pda . good example , the same one i ' m asking . postdoc e: versus you versus , and we 're catty - corner across the table , and i ' m farther away from this one and you 're farther away from that one . grad b: or or even if , like , if people were sitting right across from each other , you could n't tell the difference either . postdoc e: it seems like that would be pretty strong . across the same axis , you do n't have as much to differentiate . professor d: , we d , we do n't have a third dimension there . , so it 's postdoc e: and so my point was just that it 's gon na be differentially varia valuable . , it 's not to say , i certainly think it 's extremely val and we humans n depend on , these binaural cues . professor d: but it 's almost but it 's almost a what you 're talking about i there 's two things . professor d: there 's a sensitivity issue , and then there 's a pathological error issue . so th the one where someone is just right directly in line is a pathological error . professor d: if someone just happens to be sitting right there then we wo n't get good information from it . postdoc e: and i and if there so it and if it 's the two of you guys on the same side professor d: , if they 're close , it 's just a question of the sensitivity . so if the sensitivity is good enough and we just do n't have enough , experience with it to know how postdoc e: i ' m not trying to argue against using it , by any means . wanted to point out that weakness , that it 's topo topologically impossible to get it perfect for everybody . grad b: and dan is still working on it . so . he actually he wrote me about it a little bit , so . professor d: , the other thing you can do , if , i we 're assuming that it would be a big deal just to get somebody convince somebody to put two microphones in the pda . but if you h put a third in , you could put in the other axis . and then you 're , then you could cover professor d: @ but - but that 's , we can we 'll be all of this is there for us to study . professor d: but but , one of the at least one of the things i was hoping to get at with this is what can we do with what we think would be the normal situation if some people get together and one of them has a pda . professor d: right . , that 's the constraint of one question that both adam and i were interested in . but if you can instrument a room , this is really minor league compared with what some people are doing , right ? some people at , at brown and at cape , professor d: they both have these , big arrays on the wall . and , if you could do that , you ' ve got microphones all over the place grad b: , i saw one that was like a hundred microphones , a ten by ten array . phd a: and you could in a noisy room , they could have all kinds of noises and you can zoom right in on somebody . 
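A bare-bones version of the lag-domain cross-correlation display being described (what was shown on Dan's page is presumably more refined, e.g. a generalized cross-correlation with a phase transform; that is an assumption). One talker tends to give a single sharp peak at the lag matching their position; two simultaneous talkers at different positions smear the peak or produce two.

```python
import numpy as np

def lag_correlation(x, y, max_lag):
    """Normalized cross-correlation of two mic signals over lags -max_lag..max_lag."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt(np.dot(x, x) * np.dot(y, y))
    if denom == 0.0:
        denom = 1.0
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(x[max(0, -l):len(x) - max(0, l)],
                          y[max(0, l):len(y) - max(0, -l)]) for l in lags])
    return lags, cc / denom

# With mics only ~2 cm apart (as in a PDA), the largest physical delay is
# about 0.02 m / 343 m/s, i.e. ~58 microseconds -- under one sample at
# 16 kHz -- so in practice the peak must be interpolated; the closely
# spaced mics are exactly why the sensitivity worry raised above is real.
```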
grad b: it was all in software and they and you could pick out an individual beam and listen to it . professor d: but , the reason why i have n't focused on that as the fir my first concern is because , i ' m interested in what happens for people , random people out in some random place where they 're p having an impromptu discussion . and you ca n't just always go , " , let 's go to this heavily instrumented room that we spent tens of thousands of dollars to se to set up " . phd a: no , what you need to do is you 'd have a little fabric thing that you unroll and hang on a wall . it has all these mikes and it has a plug - in jack to the pda . professor d: the other thing actually , that gets at this a little bit of something else i 'd like to do , is what happens if you have two p d and they communicate with each other ? and then , they 're in random positions , the likelihood that , there would n't be any l likely to be any nulls , if you even had two . if you had three or four it 's . grad b: though all sorts of interesting things you can do with that , not only can you do microphone arrays , but you can do all sorts of multi - band as . so it 's it would be neat . phd a: i still like my rug on the wall idea , so if anybody patents that , then grad b: in terms of the research th research , it 's really it 's whatever the person who is doing the research wants to do . grad b: so if jose is interested in that , that 's great . but if he 's not , that 's great too . professor d: , i i i would actually like us to wind it down , see if we can still get to the end of the , birthdays thing there . grad b: , i had a couple things that i did wanna bring out . one is , do we need to sign new these again ? postdoc e: , it 's slightly different . so i would say it would be a good idea . professor d: , this morning we did n't sign anything cuz we said that if anybody had signed it already , we did n't have to . grad b: , i should ' ve checked with jane first , but the ch the form has changed . grad b: so we may wanna have everyone sign the new form . , i had some things i wanted to talk about with the thresholding i ' m doing . grad b: but , if we 're in a hurry , we can put that off . and then also anonymity , how we want to anonymize the data . postdoc e: , should i have some results to present , but i we wo n't have time to do that this time . but it seems like the anonymization is , is also something that we might wanna discuss in greater length . postdoc e: if if we 're about to wind down , what i would prefer is that we , delay the anonymization thing till next week , and i would like to present the results that i have on the overlaps . professor d: , why do n't we , so @ ok . @ it sounds like u , there were a couple technical things people would like to talk about . why do n't we just take a couple minutes to briefly do them , and then and then we postdoc e: i 'd , i 'd prefer to have more time for my results . e could i do that next week maybe ? postdoc e: ok , that 's what i ' m asking . and the anonymization , if y if you want to proceed with that now , think that 's a discussion which also n really deserves a lo a , more that just a minute . postdoc e: i really do think that , because you raised a couple of possibilities yourself , you and i have discussed it previously , and there are different ways that people approach it , e and we should grad b: alright . 
we 're we 're just we 're getting enough data now that i 'd like to do it now , before i get overwhelmed with once we decide how to do it going and dealing with it . postdoc e: it 's just . ok . i 'll give you the short version , but i do think it 's an issue that we ca n't resolve in five minutes . ok , so the short thing is , we have , tape recording , digitized recor recordings . those we wo n't be able to change . if someone says " hey , roger so - and - so " . so that 's gon na stay that person 's name . now , in terms of like the transcript , the question becomes what symbol are you gon na put in there for everybody 's name , and whether you 're gon na put it in the text where he says " hey roger " or are we gon na put that person 's anonymized name in instead ? grad b: no , because then that would give you a mapping , and you do n't wanna have a mapping . postdoc e: ok , so first decision is , we 're gon na anonymize the same name for the speaker identifier and also in the text whenever the speaker 's name is mentioned . grad b: because that would give you a mapping between the speaker 's real name and the tag we 're using , and we do n't want postdoc e: i do n't think you understood what i said . so , so in within the context of an utterance , someone says " so , roger , what do you think ? " then , it seems to me that , maybe i it seems to me that if you change the name , the transcript 's gon na disagree with the audio , and you wo n't be able to use that . grad b: we do n't we wanna we ha we want the transcript to be " roger " . because if we made the transcript be the tag that we 're using for roger , someone who had the transcript and the audio would then have a mapping between the anonymized name and the real name , and we wanna avoid that . postdoc e: ok , but then there 's this issue of if we 're gon na use this for a discourse type of thing , then and , liz was mentioning in a previous meeting about gaze direction and who 's the addressee and all , then to have " roger " be the thing in the utterance and then actually have the speaker identifier who was " roger " be " frank " , that 's going to be really confusing and make it useless for discourse analysis . postdoc e: now , if you want to , , in some cases , i know that susan ervin - tripp in some of hers , actually did do , a filter of the s signal where the person 's name was mentioned , except postdoc e: and and i cer and i so , the question then becomes one level back . , how important is it for a person to be identified by first name versus full name ? , on the one hand , it 's not a full identity , we 're taking all these precautions , and they 'll be taking precautions , which are probably even the more important ones , to they 'll be reviewing the transcripts , to see if there 's something they do n't like ok . so , maybe that 's enough protection . on the other hand , this is a small pool , and people who say things about topic x e who are researchers and - known in the field , they 'll be identifiable and simply from the first name . however , taking one step further back , they 'd be identifiable anyway , even if we changed all the names . postdoc e: so , is it really , ? now , in terms of like so i did some results , which i 'll report on n next time , which do mention individual speakers by name . now , there , the human subjects committee is very precise . you do n't wanna mention subjects by name in published reports . 
now , it would be very possible for me to take those data put them in a study , and just change everybody 's name for the purpose of the publication . and someone who looked professor d: . , , t it does n't , i ' m not knowledgeable about this , but it certainly does n't bother me to have someone 's first name in the transcript . postdoc e: , and in the form that they sign , it does say " your first name may arise in the course of the meetings " . professor d: and so so again , th the issue is if you 're tracking discourse things , if someone says , " frank said this " and then you wanna connect it to something later , you ' ve got ta have this part where that 's " frank colon " . postdoc e: , and , even more i , immediate than that just being able to , it just seems like to track from one utterance to the next utterance who 's speaking and who 's speaking to whom , cuz that can be important . s i , " you raised the point , so - and - so " , it 's be to be able to know who " you " was . postdoc e: and ac and actually you remember furthermore , you remember last time we had this discussion of how , i was avoiding mentioning people 's names , postdoc e: and it was and we made the decision that was artificial . , if we 're going to step in after the fact and change people 's names in the transcript , we ' ve done something one step worse . grad b: , i would sug i do n't wanna change the names in the transcript , but that 's because i ' m focused so much on the acoustics instead of on the discourse , and so that 's a really good point . professor d: l let me just back up this to make a brief comment about the , what we 're covering in the meeting . i realize when you 're doing this that , i did n't realize that you had a bunch of things that you wanted to talk about . , and so i was proceeding some somewhat at random , frankly . so what would be helpful would be , i and i 'll mention this to liz and andreas too , that , before the meeting if anybody could send me , any , agenda items that they were interested in and i 'll take the role of organizing them , into the agenda , but i 'd be very pleased to have everyone else completely make up the agenda . i ' ve no desire to make it up , but if no one 's told me things , then i ' m just proceeding from my guesses , and i ye , i ' m it ended up with your out your time to , i ' m just always asking jose what he 's doing , and so it 's there 's , there 's other things going on . postdoc e: , it 's not a problem . not a problem . i just could n't do it in two minutes . grad b: how will we how would the person who 's doing the transcript even know who they 're talking about ? do what i ' m saying ? grad b: , so if i ' m saying in a meeting , " and bob , wanted to do so - and - so " , grad b: if you 're doing , @ they 're just gon na write " bob " . and so . if you 're if you 're doing discourse analysis , postdoc e: , i ' m betting we 're gon na have huge chunks that are just un untranscribable by them . professor d: , they 're gon na say speaker - one , or speaker - two or speaker i grad b: , the current one they do n't do speaker identity . because in naturallyspeaking , or , excuse me , in viavoice , it 's only one person . and so in their current conventions there are no multiple speaker conventions . postdoc e: . that my understanding from yen is it yen - ching ? is that how you pronounce her name ? postdoc e: was that , they will that they will adopt the part of the conventions that we discussed , where they put speaker identifier down . 
but , h they wo n't know these people , so it 's , they 'll adopt some convention but we have n't specified to them so they 'll do something like speaker - one , speaker - two , is what i bet , but i ' m betting there 'll be huge variations in the accuracy of their labeling the speakers . we 'll have to review the transcripts in any case . professor d: and it and it may very be , since they 're not going to sit there and worry ab about , it being the same speaker , they may very go the first se the first time it changes to another speaker , that 'll be speaker - two . and the next time it 'll be speaker - three even if it 's actually speaker - one . postdoc e: and that 's ok . yes , i was thinking , the temp the time values of when it changes . grad b: the p it 's a good point , " which what do you do for discourse tracking ? " phd c: because y you to know , you do n't need to i what is the iden identification of the speakers . you only want to know professor d: if if someone says , " what is jose doing ? " and then jose says something , you need to know that was jose responding . postdoc e: unless we adopt a different set of norms which is to not i d to make a point of not identifying people by name , which then leads you to be more contextually ex explicit . postdoc e: , people are very flexible . ? , so when we did this las last week , i felt that , now , andreas may , @ , he i sometimes people think of something else at the same time and they miss a sentence , and because he missed something , then he missed the r the initial introduction of who we were talking about , and was unable to do the tracking . but i felt like most of us were doing the tracking and knew who we were talking about and we just were n't mentioning the name . so , people are really flexible . phd a: but , like , at the beginning of this meeting or , you said , or s liz , said something about , " is mari gon na use the equipment ? " , how would you say that ? phd a: it would be really hard if we made a policy where we did n't say names , plus we 'd have to tell everybody else . grad b: , darn ! , what i was gon na say is that the other option is that we could bleep out the names . phd a: the i , my own two cents worth is that you do n't do anything about what 's in the recordings , you only anonymize to the extent you can , the speakers have signed the forms and all . grad b: , but that as i said , that works great for the acoustics , but it hurts you a lot for trying to do discourse . grad b: because you do n't have a map of who 's talking versus their name that they 're being referred to . professor d: ok , so suppose someone says , " i if i really heard what , what jose said . " and then , jose responds . and part of your learning about the dialogue is jose responding to it . but it does n't say " jose " , it says " speaker - five " . so u phd a: , i see , you wanna associated the word " jose " in the dialogue with the fact that then he responded . professor d: and so , if we pass out the data to someone else , and it says " speaker - five " there , we also have to pass them this little guide that says that speaker - five is jose , professor d: and if were gon na do that we might as give them " jose " say it was " jose " . postdoc e: now , that we have these two phases in the data , which is the one which is o our use , university of washington 's use , ibm , sri . 
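A toy illustration of the trade-off under discussion: substituting the same pseudonym in both the speaker labels and the utterance text keeps discourse links intact, but the transcript then disagrees with the audio, and the substitution table itself is exactly the sensitive name-to-tag mapping. All names, tags, and code here are invented for the example.

```python
import re

# Invented mapping for illustration only -- in practice this table is
# precisely the artifact the discussion worries about leaking.
pseudonyms = {"roger": "speaker-1", "frank": "speaker-2"}

def anonymize(utterances):
    """Replace names consistently in both speaker labels and utterance text."""
    pattern = re.compile(r"\b(" + "|".join(pseudonyms) + r")\b", re.IGNORECASE)
    out = []
    for speaker, text in utterances:
        text = pattern.sub(lambda m: pseudonyms[m.group(0).lower()], text)
        out.append((pseudonyms.get(speaker, speaker), text))
    return out

dialogue = [("frank", "So, Roger, what do you think?"),
            ("roger", "I think it could work.")]
print(anonymize(dialogue))
# [('speaker-2', 'So, speaker-1, what do you think?'),
#  ('speaker-1', 'I think it could work.')]
# Discourse tracking survives (speaker-1 is addressed, then responds),
# but the text no longer matches the audio, which still says "Roger".
```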
and within that , it may be that it 's sufficient to not change the to not incorporate anonymization yet , but always , always in the publications we have to . and also , when we take it that next step and distribute it to the world , we have to . but i but i don that 's a long way from now and it 's a matter of between now and then of d of deciding how postdoc e: i it , it may be s that we 'll need to do something like actually x out that part of the audio , and just put in brackets " speaker - one " . grad b: , what we could do also is have more than one version of release . one that 's public and one that requires licensing . and so the licensed one would w we could it would be a sticky limitation . , like , we can talk about that later . postdoc e: that 's risky . that the public should be the same . that when we do that world release , it should be the same . professor d: that we have a need to have a consistent licensing policy of some sort , and phd a: , one thing to take into consideration is w are there any , the people who are funding this work , they want this work to get out and be useful for discourse . if we all of a sudden do this and then release it to the public and it 's not longer useful for discourse , grad b: , depending on how much editing we do , you might be able to still have it useful . because for discourse you do n't need the audio . so you could bleep out the names in the audio . and use the anonymized one through the transcript . professor d: she no , but she 's saying , from the argument before , she wants to be able to say if someone said " jose " in their thing , and then connect to so to what he said later , then you need it . grad b: but in the transcript , you could say , everywhere they said " jose " that you could replace it with " speaker - seven " . grad b: and then it would n't meet match the audio anymore . but it would be still useful for the professor d: and th and the other thing is if liz were here , what she might say is that she wants to look if things that cut across between the audio and the dialogue , postdoc e: - . we have to think about w @ how . that this ca n't be decided today . postdoc e: but it 's g but it was good to introduce the thing and we can do it next time . grad b: i did n't think when i wrote you that email i was n't thinking it was a big can of worms , but i it is . postdoc e: it discourse , also i wanted to make the point that discourse is gon na be more than just looking at a transcript . postdoc e: it 's gon na be looking at a t , and prosod prosodic is involved , and that means you 're going to be listening to the audio , and then you come directly into this confronting this problem . phd a: maybe we should just not allow anybody to do research on discourse , and then , we would n't have to worry about it . professor d: , maybe we should only have meetings between people who one another and who are also amnesiacs who their own name . postdoc e: we could have little labels . i wanna introduce my reservoir dogs solution again , which is everyone has like " mister white " , " mister pink " , " mister blue " . grad b: did you read the paper a few years ago where they were reversing the syllables ? they were di they had the utterances . and they would extract out the syllables and they would play them backwards . phd a: but so , the syllables were in the same order , with respect to each other , but the acous grad b: everything was in the same order , but they were the individual syll syllables were played backwards . 
and you could listen to it , and it would sound the same . grad b: people had no difficulty in interpreting it . so what we need is something that 's the reverse , that a speech recognizer works exactly the same on it but people ca n't understand it . professor d: , that 's there 's an easy way to do that . jus - jus just play it all backwards . phd a: it would be fun sometime to read them with different intonations . like as if you were talking like , " nine eight six eight seven ? " postdoc e: , in the one i transcribed , i did find a couple instances i found one instance of contrastive stress , where it was like the string had a li so it was like " nine eight two four , nine two four " . postdoc e: , they differed . , at that session i did feel like they did it more as sentences . but , sometimes people do it as phone numbers . , i ' ve i am interested in and sometimes , i s and i never know . when i do it , i ask myself what i ' m doing each time . phd a: , . , i was thinking that it must get boring for the people who are gon na have to transcribe this postdoc e: i like your question intonation . that 's very funny . i have n't heard that one . grad b: we have the transcript . we have the actual numbers they 're reading , so we 're not necessarily depending on that . ok , i ' m gon na go off .

###summary: the berkeley meeting recorder group discussed research aims and corresponding concerns for future data collection . it was agreed that a substantial amount of meeting data , drawn from different domains and comprising several speakers , is required to perform the types of discourse and acoustic analyses desired . ongoing efforts by speaker mn005 to automatically detect regions of speaker overlap were considered . it was suggested that speaker mn005 focus on a small set of acoustic parameters , e.g . energy and harmonics-related features , to distinguish regions of overlap from those containing the speech of just one speaker . disk space issues were discussed . and , finally , the problem of speaker anonymization was explored . recordings must be of existing meetings that are conducted in english . participants should ideally consist of professors and doctoral students , but no undergraduate students , who are willing to record their meetings at icsi . the meeting recorder corpus should comprise data from a large number of speakers representing different domains . attempts should also be made to optimize the speaker population for generating good language models . speaker me011 will pursue volunteers from the haas business school to record their weekly meetings at icsi . a tentative decision was made to offer participants a recorded version of their meeting on a cd rom once the transcript screening phase is complete for that meeting . non-native speakers with a low proficiency in english are problematic for language modelling . the prospect of creating another recording setup requires the elimination of certain more complicated dimensions of the existing setup , e.g . the use of close-talking and far-field microphones . speaker anonymization poses problems for the transcription process , and also for discourse analysis , as it makes it more difficult to track who is speaking and to whom a particular utterance is being addressed . as the current version of transcriptions does not include speaker identification labels , no multiple speaker transcription conventions are in use .
a student researcher will be working with speaker fe016 to investigate different strategies for automatically summarizing meetings , and identifying discussional 'hotspots'. efforts by speaker mn005 are ongoing to detect regions of speaker overlap in the signal. a total of 230 regions of overlapping speech have been manually transcribed for a subset of meeting data. supervised clustering and neural networks are being considered as means for classifying overlap. it was suggested that speaker mn005 focus on a small set of acoustic parameters , e.g . energy and harmonics-related features , to use the mixed signal to distinguish regions of overlap from those containing the speech of just one speaker. future work may also involve focussing on additional signals , and using a markov model to analyze acoustic parameters over larger time frames. beam-forming was suggested as an alternate method of detecting overlapping speech. efforts are ongoing to select an optimal method for anonymizing speakers. more disk space is gradually being made available for the storage of new meeting recorder data.
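the transcript-side replacement floated in the discussion — substitute a speaker token like "speaker-seven" for every occurrence of a name , and bleep the matching audio spans separately — is simple to sketch . this is a minimal illustration only : the name-to-token table , the function name , and the sample sentence below are invented , not drawn from the corpus .

```python
import re

# hypothetical name-to-token table -- in practice it would come from the
# participant roster of each meeting; none of these mappings are real.
NAME_TO_TOKEN = {
    "jose": "speaker-seven",
    "liz": "speaker-three",
}

def anonymize_transcript(text: str, table: dict) -> str:
    """replace each participant name with its speaker token.

    word boundaries keep short names from matching inside other words;
    the matching audio spans would still need to be bleeped separately.
    """
    for name, token in table.items():
        text = re.sub(rf"\b{re.escape(name)}\b", token, text, flags=re.IGNORECASE)
    return text

print(anonymize_transcript("and everywhere they said jose , replace it", NAME_TO_TOKEN))
# -> and everywhere they said speaker-seven , replace it
```

note that , exactly as raised in the discussion , this keeps the transcript useful for discourse analysis ( every "jose" maps to the same token ) while no longer matching the raw audio .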
grad a: - . good . i know that he 's going to like , taiwan and other places to eat . so . grad a: so that 's why keith and i are going to be a little dazed for the first half m the meeting . grad d: , how di how d exactly did , that paper lead to anti - lock brakes ? grad c: , liz suggested we could start off by , doing the digits all at the same time . grad d: all at the same time . i if i would get distracted and confused , probably . grad a: are we gon na start all our meetings out that way from now on ? . too bad . i kinda like it . grad d: are we to r just to make i 's going on , we 're talking about robert 's thesis proposal today ? is that professor e: , you had s you said there were two things that you might wanna do . one was rehearse your i talk grad c: not not rehearse , i have just not spent any time on it , so show you what i ' ve got , get your input on it , and maybe some suggestions , that would be great . and the same is true for the proposal . i will have time to do some revision and some additional on various airplanes and trains . so , . i how much of a chance you had to actually read it grad c: and they will be incorporated . , the it says , " this is construal " , and then it continues to say that one could potentially build a probabilistic relational model that has some general , domain - general rules how things are construed , and then the idea is to use ontology , situation , user , and discourse model to instantiate elements in the classes of the probabilistic relational model to do some inferences in terms of what is being construed as what in our beloved tourism domain . but , with a focus on professor e: , no i s i see this has got the castle in it , and like that . grad d: , maybe the version i did n't have that i mine the w did the one you sent on the email have the that was the most recent one ? grad c: , if you would have checked your email you may have received a note from yees asking you to send me the , up - to - d grad a: we can talk about it later . that 's not even ready , so . , ok ! go on t to , whatever . grad a: i ' m making changes . do n't worry about that . ok . mmm - mmm . ! ok , go on . grad f: there 's only one " s " in " interesting " . there 's only one " s " in " interesting " . on page five . grad c: the twenty - ninth . that 's when i ' m meeting with wolfgang wahlster to sell him this idea . ok ? then i ' m also going to present a little talk at eml , about what we have done here and so , i ' m gon na start out with this slide , so the most relevant aspects of our stay here , and , then i ' m asking them to imagine that they 're standing somewhere in heidelberg and someone asks them in the morning the cave forty - five is a - known discotheque which is certainly not open at that time . and so they 're supposed to imagine that , do they think the person wants to go there , or just know where it is ? which is probably not , the case in that discotheque example , or in the bavaria example , you just want to know where it is . and . so we can make a point that here is ontological knowledge but if it 's nine pm in the evening then the discotheque question would be , one that might ask for directions instead of just location . , and . that 's motivating it . then what have we done so far ? we had our little bit of , smartkom , that we did , everth grad c: that 's the not the construction parser . that 's the , tablet - based parser , grad c: twelve ? ok . and , and fey is doing the synthesis as we speak . that 's all about that . 
then i ' m going to talk about the data , these things about , actually i have an example , probably . two s can you hear that ? or should i turn the l volume on . grad c: the you can observe that all the time , they 're trying to match their prosody onto the machine . grad c: ok . and and . , i will talk about our problems with the rephrasing , and how we solved it , and some preliminary observations , also , i ' m not gon na put in the figures from liz , but it would interesting to , , point out that it 's the same . , as in every human - human telephone conversation , and the human - computer telephone conversation is quite d quite different from , some first , observations . then feed you back to our original problem cuz , how to get there , what actually is happening there today , and then maybe talk about the big picture here , e tell a little bit as much as about the ntl story . i wa i do wanna , i ' m not quite about this , whether i should put this in , that , you have these two different ideas that are or two different camps of people envisioning how language understanding works , and then , talk a bit about the embodied and simulation approach favored here and as a prelude , i 'll talk about monkeys in italy . and , srini was gon na send me some slides but he did n't do it , so from but i have the paper , make a resume of that , and then i stole an x - schema from one of your talks . grad c: , and that 's now i ' m not going to bring that . so that 's what i have , so far , and the rest is for airplanes . so x - schemas , then , i would like to do talk about the construction aspect and then at the end about our bayes - net . end of story . anything i forgot that we should mention ? , maybe the fmri . should i mention the fact that , we 're also actually started going to start to look at people 's brains in a more direct way ? grad a: you might just wanna like , tack that on , as a comment , to something . professor e: , the time to mention it , if you mention it , is when you talk about mirror neurons , then you should talk about the more recent , about the kicking and , the , and that the plan is to see to what extent the you 'll get the same phenomena with stories about this , so that and that we 're planning to do this , which , we are . so that 's one thing . depends . , there is a , whole language learning story , ok ? which , actually , i even on your five - layer slide , you ' ve got an old one that leaves that off . grad c: , i do have it here . and , , the big picture is this bit . but , it would but i do n't think i am capable of do pulling this off and doing justice to the matter . , there is interesting in her terms of how language works , so the emergentism story would be to be , it would be to tell people how what 's happening there , plus how the , language learning works , professor e: what you might wanna do is , and may not , but you might wanna this is rip off a bunch of the slides on the anal there the there we ' ve got various i generations of slides that show language analysis , and matching to the underlying image schemas , and , how the construction and simulation that ho that whole th grad c: , i , but i also have you trash you left over , your quals and your triple - ai . professor e: the quals w the quals slides would be fine . you could get it out of there , or some grad a: which even email you then , like there probably was a little few changes , not a big deal . , you could steal anything you want , i do n't care . which you ' ve already done , . 
grad c: i might even mention that this work you 're doing is also with the mpi in leipzig , so . professor e: , it 's different , this is the , dna building , or someth the double helix building . professor e: the it was it turns out that if you have multiple billions of dollars , y you can do all sorts of weird things , and grad d: , they 're building a building in the shape of dna , is that what you said ? professor e: you d you really now i spent the last time i was there i spent maybe two hours hearing this story which is , professor e: , no , y i there 's infinite money . see you th you then fill it with researchers . grad a: and give them more money . they just want a fun place for them to work . grad c: , the offices are actually a little the , think of , ramps , coming out of the double helix and then you have these half - domes , glass half - domes , and the offices are in the glass half - dome . professor e: so , that 's a good point , th that the date , the , a lot of the this is interacting with , people in italy but also definitely the people in leipzig the b the combination of the biology and the leipzig connection might be interesting to these guys , anyway ! enough of that , let 's talk about your thesis proposal . grad f: you might want to , double - check the spellings of the authors ' names on your references , you had a few , misspells in your slides , there . like i believe you had " jackendorf " . , unless there 's a person called " jackendorf " , grad a: i 'll probably i c might have i 'll probably have comments for you separately , not important . grad c: i was ac actually worried about bibtex . no , that 's quite possible . that 's copy and paste from something . professor e: so i did note i it looks like the , metaphor did n't get in yet . professor e: , s reference is one thing , the question is there any place , did you put in something about , professor e: , the individual , we 'd talked about putting in something about people had , ok . good . i see where you have it . so the top of the second of pa page two you have a sentence . but , what i meant is , even before you give this , to wahlster , you should , unless you put it in the text , and i do n't think it 's there yet , about we talked about is the , scalability that you get by , combining the constructions with the general construal mechanism . is that in there ? professor e: , ok , so where is it , cuz i 'll have to take a look . grad c: , but i did not focus on that aspect but , ehhh , it 's just underneath , , that reference to metaphor . so it 's the last paragraph before two . so on page two , the main focus but that 's really professor e: no , it s says it but it does n't say it does n't it d professor e: , it does n't give the punch line . cuz let me tell the gang what the punch line is , because it 's actually important , which is , that , the constructions , that , nancy and keith and friends are doing , are , in a way , quite general but cover only base cases . and to make them apply to metaphorical cases and metonymic cases and all those things , requires this additional mechanism , of construal . and the punch line is , he claimed , that if you do this right , you can get essentially orthogonality , that if you introduce a new construction at the base level , it should com , interact with all the metonymies and metaphors so that all of the projections of it also should work . and , similarly , if you introduce a new metaphor , it should then , compose with all of the constructions . 
and it to the extent that 's true then it 's a big win over anything that exists . grad d: so does that mean instead of having tons and tons of rules in your context - free grammar you just have these base constructs and then a general mechanism for coercing them . professor e: - . so that , , in the metaphor case , that you have a direct idea of a source , path , and goal and any metaphorical one and abstract goals and all that you can do the same grammar . professor e: and it is the same grammar . but , the trick is that the way the construction 's written it requires that the object of the preposition be a container . , " trouble " is n't a container , but it gets constr construed as a c container . grad d: so with construal you do n't have to have a construction for every possible thing that can fill the rule . professor e: so 's it 's a very big deal , i in this framework , and the thesis proposal as it stands does n't , i do n't think , say that as clearly as it could . grad c: no , it does n't say it . no . even though one could argue what if there are basic cases , even . , it seems like nothing is context - free . professor e: , nothing is context - free , but there are basic cases . that is , there are physical containers , there are physical paths , there et cetera . grad c: but " walked into the cafe and ordered a drink , " and " walked into the cafe and broke his nose , " that 's professor e: , a cafe can be construed as a container , or it can be construed as a obstacle , or as some physical object . so there are multiple construals . and that 's part of what has to be done . this is why there 's this interaction between the analysis and the construal . the b the double arrow . so , , it does n't magically make ambiguity go away . professor e: but it does say that , if you walked into the cafe and broke your nose , then you are construing the cafe as an obstacle . and if that 's not consistent with other things , then you ' ve got ta reject that reading . grad d: you con you conditioned me with your first sentence , and so , " why would he walk into the cafe and then somehow break his nose ? " , grad c: you do n't find that usage , i checked for it in the brown national corpus . the " walk into it " never really means , w as in walked smack grad c: , but , y if you find " walked smacked into the cafe " or " slammed into the wall " professor e: , no , but " run into " does . because you will find " run into , " grad a: , " run into " might even be more impact sense than , container sense . professor e: but like , " run into an old friend " , it probably needs its own construction . , , george would have i ' m some exa complicated ex reason why it really was an instance of something else professor e: and maybe it is , but , there are idioms and my is that 's one of them , but , i . grad f: but , no , i it has a life of its own . it 's partially inspired by the spatial professor e: , this is this motivated but , mo for , motivated , but then you ca n't parse on motivated . grad a: there 's there 's lots of things you could make t - shirts out of , but , this has gotten wh we do n't need the words to that . professor e: anything else you want to ask us about the thesis proposal , you got we could look at a particular thing and give you feedback on it . grad c: there actually the i what would have been really is to find an example for all of this , from our domain . so maybe if we w if we can make one up now , that would be c incredibly helpful . 
grad a: so , w where it should illustrate wh when you say all this , do you mean , like , i , the related work , grad c: and y it 's , we have some constructions and then it 's construed as something , and then we may get the same constructions with a metaphorical use that 's also relevant to the domain . professor e: ok , f let 's suppose you use " in " and " on " . , that 's what you started with . so " in the bus " and " on the bus , " , that 's actually a little tricky in english because to some extent they 're synonyms . grad c: on the bus is a m is a metaphorical metonymy that relates some meta path metaphorically and you 're on that path and th w it 's he there 's a platform notion , grad c: he 's on the standing on the bus waving to me . but th the regular as we speak " j johno was on the bus to new york , " , he 's that 's , what did i call it here , the transportation schema , something , where you can be on the first flight , on the second flight , and you can be , on the wagon . professor e: right . so so that may or may not be what you want to do . you could do something much simpler like " under the bus , " , where grad c: but it 's unfortunately , this is not really something a tourist would ever say . so . grad c: we had we had initially we 'd started discussing the " out of film . " and there 's a lot of " out of " analysis , so , could we capture that with a different construal of grad a: , it 's a little it 's , we ' ve thought about it before , t to use the examples in other papers , and it 's a little complicated . cuz you 're like , it 's a state of there 's resource , grad a: right , and like , what is film , the state . you 're out of the state of having film , right ? and somehow film is standing for the re the resour the state of having some resource is just labeled as that resource . grad f: but and plus the fact that there 's also s , can you say , like , " the film ran out " or , maybe you could say something like " the film is out " grad f: so like the film went away from where it should be , namely with you , the the film is gone , i never really knew what was going on , i find it a little bit farfetched to say that " i ' m out of film " means that i have left the state of having film like that , professor e: or being out of something as , as . so " running out of it " definitely has a process aspect to it . grad c: is the d the final state of running out of something is being out of it . professor e: , so nob so no one has in of the , professional linguists , they have n't there was this whole thesis on " out of " . professor e: there was a , it may be just " out " . there was " over " but there was also a paper on " out " . professor e: ok . but anyway . we 're not gon na do that between now and next week . grad a: but . it 's not one of the y it 's more straightforward ones forward ones to defend , so you probably do n't want to use it for the purposes th these are you 're addressing like , computational linguists , right . or are you ? grad c: computer it 's more there 's going to be the just four computational linguists , by coincidence , but the rest is , whatever , biocomputing people and physicists . professor e: no no , but not for your talk . i ' m - we 're worrying about the th the thes grad a: , like so i would try to i would stay away from one that involves weird construal . grad c: is there a bakery around here . so if you c we really just construe it as a grad c: no , it 's the bakery itself is it a building ? , that you want to go to ? 
or is it something to eat that you want to buy ? professor e: no , no . the question is d do you wanna construe do you wanna constr - strue professor e: r exactly . it 's because do you wanna c do you want to view the bakery as a p a place that i , if y professor e: th , that 's one . you want to buy something . but the other is , yo you might have smelled a smell and are just curious about whether there 'd be a bakery in the neighborhood , or , pfff , you wonder how people here make their living , and there 're all sorts of reasons why you might be asking about the existence of a bakery that does n't mean , " i want to buy some baked goods . " those are interesting examples but it 's not clear that they 're mainly construal examples . professor e: so let 's so let 's think about this from the point of view of construal . so let 's first do a so the metonymy thing is probably the easiest and a and actually the though , the one you have is n't quite professor e: n no not that one , that 's a the background . this is the t , page five . grad a: ellipsis . like , " it " does n't refer to " thing , " it refers to acti , j thing standing for activ most relevant activity for a tourist you could think of it that way , grad c: , my argument here is it 's the same thing as " plato 's on the top shelf , " grad c: i ' m con , th that you can refer to a book of plato by using " plato , " grad c: and you can refer back to it , and so you can castles have as tourist sites , have admission fees , so you can say " where is the castle , how much does it cost ? " how far is it from here ? so , you 're also not referring to the width of the object , or so , www . professor e: ok . can we think of a metaphorical use of " where " in the tourist 's domain ? so it 's you can sometimes use " where " f for " when " professor e: in the sense of , where wh where was , where was heidelberg , in the thirty years ' war ? grad a: or like its developmental state like that , you could i you could get that . grad f: , there 's also things like , s i could ask something like " where can i find out about blah - blah " in a does n't nece i do n't necessarily have to care about the spatial location , just give me a phone number professor e: , " where could i learn its opening hours , " . but that 's not metaphorical . it 's another so we 're thinking about , or we could also think about , professor e: it i but it 's a state and the issue is , is that it may be just a usage , professor e: ! how about i , " i ' m in a state of exhaustion " ? professor e: a st , you can certainly say , , " i ' m in overload . " tu - stur tourists will often say that . grad a: there 're too there 're all sorts of fixed expressions i do n't like i ' m out of sorts now ! like " i ' m in trouble ! " grad c: i when , just f u the data that i ' ve looked at so far that rec , there 's tons of cases for polysemy . so , mak re making reference to buildings as institutions , as containers , as build , whatever . , so ib in mus , in museums , as a building or as something where pictures hang versus , ev something that puts on exhibits , grad a: why do n't you want to use any of those ? so y you do n't wanna use one that 's grad c: but the argument should be , can be made that , despite the fact that this is not the most met metaphorical domain , because people interacting with hti systems try to be straightforward and less lyrical , construal still is , , completely , key in terms of finding out any of these things , so , . 
professor e: so that 's a reasonable point , that it in this domain you 're gon na get less metaphor and more metonymy . grad c: we , i with a i looked with a student i looked at the entire database that we have on heidelberg for cases of metonymy . grad c: hardly anything . so not even in descriptions w did we find anything , relevant . grad c: maybe the " where is something " question as a whole , can be construed as , u i locational versus instructional request . grad a: instruction . , directions ? . , that was definitely treated as an example of construal . right ? grad c: but then you 're not on the lexical level , that 's one level higher . professor e: you know , where r what color was this in the nain nineteenth century . what was this p instead of wh what how was this painted , what color was this painted , was this alleyway open . grad c: , maybe we can include that also in our second , data run . we c we can show people pictures of objects and then have then ask the system about the objects and engage in conversation on the history and the art and the architecture and . professor e: ok . so why do n't we plan to give you feedback electronically . wish you a good trip . all success . grad d: for some reason when you said " feedback electronically " of that you ever see the simpsons where they 're like the family 's got the buzzers and they buzz each other when they do n't like what the other one is saying ?
the meeting was taken up by discussion about a thesis proposal and a talk to take place at eml. the latter will present the work that is currently being done at icsi , including examples of inference of user intentions and recordings from the on-going data collection. the talk will also outline the theoretical ( x-schemas , image schemas , bayes-nets ) and neural background. the thesis proposal , on the other hand , presents the idea of "construal" and makes claims as to how inferences are drawn in a probabilistic relational model by using information from the ontology , situation , user and discourse models. it was advised that more emphasis should be put on the role of construal in the understanding of metaphor and metonymy. base constructions deal with the norm , while further general domain mechanisms determine how the constructions are invoked depending on the context. several potential examples of polysemy were discussed in detail: "walk/run into" , "on the bus" , "out of film" , "where is x?". however , none of them was an example of lexical polysemy resolved by construal straightforward enough to include in the proposal; the tourist domain is not metaphor rich. as the talk at eml will also refer to a theoretical framework , it was suggested that along with presenting ntl and the piece on mirror neurons , it also alludes to relevant fmri work. the neural side of the research could be of interest to various groups. the language analysis itself will be introduced in terms of image schemas. on the other hand , it was arranged for more feedback on the thesis proposal to be sent by email. the latest version of the construction formalism will also be needed to complete the presentation. it was noted that the thesis proposal did not put any emphasis on metaphor or on the scalability achieved by combining constructions with general construal mechanisms: constructions cover base cases , while metaphoric and metonymic cases are resolved with the extra help provided by construal. during this discussion , however , the suggested examples of polysemy , which is tackled by the proposed framework , were not straightforward enough. it was agreed that metaphors are not abundant in the spatial domain. some of the candidates discussed were phrases like "on the bus" , "out of film" , non-spatial uses of "where" or other fixed expressions. the thesis proposal focuses on how construal works in the tourist domain. one can build a probabilistic relational model with domain general rules that define how ontology , situation , user and discourse models combine to infer intentions. the talk for eml includes a presentation of the motivation behind the project , as well as the work already done on smartkom ( parser , generator , synthesis ). furthermore , the data collection currently taking place and some preliminary observations are going to be outlined. all this is going to be given from the perspective of the general theoretical framework , ntl , with further explanations on x-schemas and also the embodied simulation approach.
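that last point — domain-general rules combining ontology , situation , user and discourse models to infer intentions — can be caricatured with the discotheque example from the talk : the same "where is x?" question flips between a directions reading and a pure location reading depending on whether the venue is plausibly open . the opening hours and probabilities below are invented stand-ins for what would really be learned in a probabilistic relational model .

```python
# toy stand-in for the ontology/situation combination: the same question is
# read as "wants directions" or "just wants the location" depending on time.
# all hours and probabilities here are invented for illustration.
OPENING_HOURS = {
    "discotheque": range(22, 24),   # only open late in the evening
    "statue": range(0, 24),         # always viewable
}

def p_wants_directions(venue_type: str, hour: int) -> float:
    open_now = hour in OPENING_HOURS[venue_type]
    # a closed venue makes "where is it?" much more likely a pure
    # location question (the morning-discotheque case in the talk).
    return 0.8 if open_now else 0.15

for venue, hour in [("discotheque", 9), ("discotheque", 23), ("statue", 9)]:
    print(f"{venue} at {hour}:00 -> p(wants directions) = {p_wants_directions(venue, hour):.2f}")
```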
professor b: test . let 's see . move it bit . test ? ok , i it 's alright . so , let 's see . , barry 's not here and dave 's not here . , say about just q just quickly to get through it , that dave and i submitted this asru . professor b: so . , it 's interesting . , we 're dealing with rever reverberation , and , when we deal with pure reverberation , the technique he 's using works really , really . , and when they had the reverberation here , we 'll measure the signal - to - noise ratio and it 's , about nine db . , professor b: and actually it brought up a question which may be relevant to the aurora too . , i know that when you figured out the filters that we 're using for the mel scale , there was some experimentation that went on at , at ogi . but one of the differences that we found between the two systems that we were using , the aurora htk system baseline system and the system that we were the , other system we were using , the sri system , was that the sri system had maybe a , hundred hertz high - pass . and the , aurora htk , it was like twenty . professor b: the edge is really , sixty - four ? for some reason , dave thought it was twenty , professor b: but do , h how far down it would be at twenty hertz ? what the how much rejection would there be at twenty hertz , let 's say ? phd d: twenty hertz frequency , it 's zero at twenty hertz , right ? the filter ? phd c: yea - actually , the left edge of the first filter is at sixty - four . professor b: it 's actually set to zero ? what filter is that ? is this , from the from phd c: it this is the filter bank in the frequency domain that starts at sixty - four . professor b: right . ok . so that 's a little different than dave thought , . but but , still , it 's possible that we 're getting in some more noise . so i wonder , is it @ was there their experimentation with , say , throwing away that filter ? and , phd d: , throwing away the first ? , we ' ve tried including the full bank . right ? from zero to four k . professor b: right , but the question is , whether sixty - four hertz is , too , low . phd d: , make it a hundred or so ? i t i ' ve tried a hundred and it was more or less the same , or slightly worse . professor b: mmm . that 'd be something to look at sometime because what , , he was looking at was performance in this room . professor b: would that be more like , you 'd think that 'd be more like speechdat - car , i , in terms of the noise . the speechdat - car is more , roughly stationary , a lot of it . and and ti - digits maybe is not so much as - . , maybe it 's not a big deal . but , anyway , that was just something we wondered about . but , , certainly a lot of the noise , is , below a hundred hertz . , the signal - to - noise ratio , looks a fair amount better if you high - pass filter it from this room . but it 's still pretty noisy . even even for a hundred hertz up , it 's still fairly noisy . the signal - to - noise ratio is actually still pretty bad . so , , the main the phd a: so that 's on th that 's on the f the far field ones though , right ? professor b: , we got a video projector in here , and , which we keep on during every session we record , which , i w we were aware of professor b: but we thought it was n't a bad thing . , that 's a noise source . , and there 's also the , air conditioning . which , , is a pretty low frequency thing . 
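the measurement being described here — room noise dominated by components below about a hundred hertz , and a signal-to-noise ratio that looks better once the signal is high-pass filtered — can be sketched with synthetic signals . the sampling rate , filter order and toy waveforms below are assumptions , not the group's actual setup ; the mel conversion at the end is the standard formula , included only to relate the 64 hz and 100 hz filter-bank edges under discussion .

```python
import numpy as np
from scipy.signal import butter, lfilter

def snr_db(speech: np.ndarray, noise: np.ndarray) -> float:
    """signal-to-noise ratio as 10*log10 of the power ratio."""
    return 10 * np.log10(np.mean(speech ** 2) / np.mean(noise ** 2))

def highpass(x: np.ndarray, fs: int, cutoff: float = 100.0, order: int = 4) -> np.ndarray:
    b, a = butter(order, cutoff, btype="highpass", fs=fs)
    return lfilter(b, a, x)

fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 300 * t)   # toy stand-in for speech energy
hum = 0.1 * np.sin(2 * np.pi * 60 * t)       # low-frequency hum (fan / projector)
hiss = 0.01 * np.random.default_rng(0).standard_normal(fs)
noise = hum + hiss

print(f"raw snr:        {snr_db(speech, noise):5.1f} db")
print(f"highpassed snr: {snr_db(highpass(speech, fs), highpass(noise, fs)):5.1f} db")

# relating the debated filter-bank edges on the mel scale:
mel = lambda f: 2595 * np.log10(1 + f / 700)
print(f"mel(64 hz) = {mel(64):.1f}, mel(100 hz) = {mel(100):.1f}")
```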
so , those are major components , professor b: but , it , i maybe i said this last week too but it really became apparent to us that we need to take account of noise . and , so when he gets done with his prelim study one of the next things we 'd want to do is to take this , noise , processing and , synthesize some speech from it . professor b: and then , in about , a little less than two weeks . , it might even be sooner . , let 's see , this is the sixteenth , seventeenth ? , i if he 's before it might even be in a week . professor b: so they do it right at the beginning of the semester . , that was one the overall results seemed to be first place in the case of either , artificial reverberation or a modest sized training set . , either way , i , it helped a lot . and but if you had a really big training set , a recognizer , system that was capable of taking advantage of a really large training set that one thing with the htk is that is has the as we 're using the configuration we 're using is w s is being bound by the terms of aurora , we have all those parameters just set as they are . so even if we had a hundred times as much data , we would n't go out to , ten or t or a hundred times as many gaussians or anything . , it 's hard to take advantage of big chunks of data . , whereas the other one does expand as you have more training data . professor b: it does it automatically , actually . and so , that one really benefited from the larger set . and it was also a diverse set with different noises and . , so , that seemed to be so , if you have that better recognizer that can build up more parameters , and if you , have the natural room , which in this case has a p a pretty bad signal - to - noise ratio , then in that case , the right thing to do is just do u use speaker adaptation . and and not bother with this acoustic , processing . but that would not be true if we did some explicit noise - processing as , the convolutional things we were doing . that 's what we found . phd a: i , started working on the mississippi state recognizer . so , i got in touch with joe and , from your email and things like that . phd a: and he gave me all of the pointers and everything that i needed . and so i downloaded the , there were two things , that they had to download . one was the , i the software . and another wad was a , like a sample run . so i downloaded the software and compiled all of that . and it compiled fine . phd a: and so i have n't grabbed the latest one that he just , put out yet . phd a: so . , but , the software seemed to compile fine and everything , so . and , professor b: is there any word yet about the issues about , adjustments for different feature sets or anything ? phd a: no , i d you asked me to write to him and i forgot to ask him about that . or if i did ask him , he did n't reply . i do n't remember yet . , i 'll d i 'll double check that and ask him again . professor b: it 's like that could r turn out to be an important issue for us . phd d: cuz they have , already frozen those in i insertion penalties and all those is what i feel . because they have this document explaining the recognizer . and they have these tables with , various language model weights , insertion penalties . phd d: and , on that , they have run some experiments using various insertion penalties and all those phd d: , p the one that they have reported is a nist evaluation , wall street journal . phd d: no . 
so they 're , like so they are actually trying to , fix that those values using the clean , training part of the wall street journal . which is , the aurora . aurora has a clean subset . , they want to train it and then this they 're going to run some evaluations . professor b: so they 're set they 're setting it based on that ? so now , we may come back to the situation where we may be looking for a modification of the features to account for the fact that we ca n't modify these parameters . but it 's still worth , just since , just chatting with joe about the issue . phd a: , do you think that 's something i should just send to him or do you think i should send it to this there 's an a m a mailing list . professor b: , it 's not a secret . , we 're , certainly willing to talk about it with everybody , but i think that , it 's probably best to start talking with him just to @ , it 's a dialogue between two of you about what , what does he think about this and what could be done about it . if you get ten people in involved in it there 'll be a lot of perspectives based on , how professor b: but , it all should come up eventually , but if there is any , way to move in a way that would , be more open to different kinds of features . but if , if there is n't , and it 's just shut down and then also there 's probably not worthwhile bringing it into a larger forum where political issues will come in . phd d: so this is now it 's compiled under solaris ? because he there was some mail r saying that it 's may not be stable for linux and all those . professor b: i noticed , just glancing at the , hopkins workshop , web site that , one of the thing i , we 'll see how much they accomplish , but one of the things that they were trying to do in the graphical models thing was to put together a , tool kit for doing , r , arbitrary graphical models for , speech recognition . so and jeff , the two jeffs were professor b: , he was here for a couple years and he , got his phd . he and he 's , been at ibm for the last couple years . professor b: , so he did his phd on dynamic bayes - nets , for speech recognition . he had some continuity built into the model , presumably to handle some , inertia in the production system , and , phd c: , i ' ve been playing with , first , the , vad . , so it 's exactly the same approach , but the features that the vad neural network use are , mfcc after noise compensation . , i have the results . phd c: this is what we get after this so , actually , we , here the features are noise compensated and there is also the lda filter . , and then it 's a pretty small neural network which use , nine frames of six features from c - zero to c - fives , plus the first derivatives . and it has one hundred hidden units . professor b: two outputs . so i about eleven thousand parameters , which actually should n't be a problem , even in small phones . phd c: it should be ok . so the previous syst it 's based on the system that has a fifty - three point sixty - six percent improvement . it 's the same system . the only thing that changed is the n a p a es the estimation of the silence probabilities . which now is based on , cleaned features . professor b: and , it 's a l it 's a lot better . that 's great . phd c: so it 's not bad , but the problem is still that the latency is too large . phd c: because the latency of the vad is two hundred and twenty milliseconds . and , the vad is used , i for on - line normalization , and it 's used before the delta computation . 
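the "about eleven thousand parameters" figure follows directly from the sizes just quoted (nine frames of six cepstra plus first derivatives, one hundred hidden units, two outputs); the per-unit bias terms are an assumption about the layout:

    # parameter count for the vad mlp described above
    frames = 9
    feats_per_frame = 6 + 6            # c0..c5 plus their first derivatives
    inputs = frames * feats_per_frame  # 108
    hidden = 100
    outputs = 2                        # speech / non-speech

    weights = inputs * hidden + hidden * outputs
    biases = hidden + outputs          # assuming one bias per unit
    print(weights + biases)            # 11102, i.e. "about eleven thousand"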
so if you add these components it goes t to a hundred and seventy , right ? professor b: i ' m confused . you started off with two - twenty and you ended up with one - seventy ? professor b: so it 's two - twenty . i the is this are these twenty - millisecond frames ? is that why ? is it after downsampling ? or phd c: the two - twenty is one hundred milliseconds for the no , it 's forty milliseconds for t for the , cleaning of the speech . then there is , the neural network which use nine frames . so it adds forty milliseconds . phd c: after that , you have the , filtering of the silence probabilities . which is a million filter it , and it creates a one hundred milliseconds delay . so , phd c: , there are twenty that comes from there is ten that comes from the lda filters also . right ? , so it 's two hundred and ten , phd d: if you are phrasing f using three frames , it is thirty here for delta . phd d: so five frames , that 's twenty . so it 's who un two hundred and ten . professor b: forty for the i n ann , a hundred for the smoothing . , but at ten , phd d: at th { nonvocalsound } at the input . , that 's at the input to the net . phd d: and there i so it 's like s five , six cepstrum plus delta at nine frames of professor b: ten milliseconds for lda filter , and t and ten another ten milliseconds you said for the frame ? phd c: for the frame i . i computed two - twenty , it 's i it 's for the fr the phd c: so this is the features that are used by our network and then afterwards , you have to compute the delta on the , main feature stream , which is , delta and double - deltas , which is fifty milliseconds . professor b: no , the after the noise part , the forty the other hundred and eighty , a minute . some of this is , is in parallel , is n't it ? , the lda , you have the lda as part of the v d - , vad ? or professor b: , it does ? so in that case there is n't too much in parallel . phd c: but , we could probably put the delta , before on - line normalization . it should not that make a big difference , phd a: could that help a little bit ? , i there 's a lot of things you could do to professor b: so if you put the delta before the , ana on - line if then it could go in parallel . phd c: cuz the time constant of the on - line normalization is pretty long compared to the delta window , so . it should not make professor b: and you ought to be able to shove tw , sh pull off twenty milliseconds from somewhere else to get it under two hundred , right ? professor b: the hundred milla mill a hundred milliseconds for smoothing is an arbitrary amount . it could be eighty and probably do @ phd a: i a hun wh - what 's the baseline you need to be under ? two hundred ? professor b: , we . they 're still arguing about it . , if it 's two if it 's , if it 's two - fifty , then we could keep the delta where it is if we shaved off twenty . if it 's two hundred , if we shaved off twenty , we could , meet it by moving the delta back . phd a: so , how do that what you have is too much if they 're still deciding ? professor b: , we do n't , but it 's just , the main thing is that since that we got burned last time , and , by not worrying about it very much , we 're just staying conscious of it . professor b: and so , th , if a week before we have to be done someone says , " , you have to have fifty milliseconds less than you have now " , it would be pretty frantic around here . phd a: but still , that 's a pretty big , win . 
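pulling the quoted one-sided delays together, the two hundred and twenty milliseconds breaks down roughly as below; the grouping and labels follow the back-and-forth above and should be read as approximate:

    # one-sided delays on the vad path, in milliseconds
    vad_path = {
        "noise cleaning":          40,
        "delta at net input":      20,   # five-frame window -> +/- 2 frames
        "vad mlp context":         40,   # nine frames -> +/- 4 frames of 10 ms
        "silence prob smoothing": 100,
        "lda filter":              10,
        "framing":                 10,
    }
    total = sum(vad_path.values())
    print(total)          # 220

    # two of the knobs floated for getting under a 200 ms budget:
    print(total - 20)     # cut the smoothing from 100 to 80 ms
    print(total - 40)     # also move a delta computation into parallel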
and it does n't seem like you 're in terms of your delay , you 're , that professor b: he added a bit on , i , because before we were had were able to have the noise , , and the lva be in parallel . and now he 's requiring it to be done first . phd c: , but the main thing , maybe , is the cleaning of the speech , which takes forty milliseconds or so . phd d: , the lda we , is , like is it very crucial for the features , right ? phd c: this is the first try . , i maybe the lda 's not very useful then . professor b: , you have twenty for delta computation which y now you 're doing twice , but yo w were you doing that before ? phd c: , in the proposal , the input of the vad network were just three frames , . professor b: so , what you have now is fort , forty for the noise , twenty for the delta , and ten for the lda . that 's seventy milliseconds of which was formerly in parallel , that 's the difference as far as the timing , and you could experiment with cutting various pieces of these back a bit , we 're s we 're not in terrible shape . phd a: where where is this fifty - seven point o two in comparison to the last evaluation ? phd d: so this is like the first proposal . the proposal - one . it was forty - four , actually . professor b: and we still do n't have the neural net in . so so it 's so it 's we 're we 're doing better . professor b: , we 're getting better recognition . , i ' m other people working on this are not sitting still either , but , the important thing is that we learn how to do this better , and , . so , our , you can see the kind of numbers that we 're having , say , on speechdat - car which is a hard task , cuz it 's really , it 's just sort of reasonable numbers , starting to be . , it 's still terri phd c: , even for a - matched case it 's sixty percent error rate reduction , which is phd c: so actually , this is in between what we had with the previous vad and what sunil did with an idl vad . which gave sixty - two percent improvement , right ? phd c: so , if you use , like , an idl vad , for dropping the frames , phd c: the best that we can get i that means that we estimate the silence probability on the clean version of the utterances . then you can go up to sixty - two percent error rate reduction , globally . mmm phd a: so that would be even that would n't change this number down here to sixty - two ? phd c: if you add a g good v very good vad , that works as a vad working on clean speech , then you wou you would go professor b: probably . so fi si fifty - three is what you were getting with the old vad . and sixty - two with the , quote , unquote , cheating vad . and fifty - seven is what you got with the real vad . phd c: , the next thing is , i started to play , i do n't want to worry too much about the delay , no . maybe it 's better to for the decision from the committee . , but i started to play with the , , tandem neural network . did the configuration that 's very similar to what we did for the february proposal . so . there is a f a first feature stream that use straight mfcc features . , these features actually . and the other stream is the output of a neural network , using as input , also , these , cleaned mfcc . phd c: so there is just this feature stream , the fifteen mfcc plus delta and double - delta . phd c: , so it 's makes forty - five features that are used as input to the htk . and then , there is there are more inputs that comes from the tandem mlp . 
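for concreteness, this is what those fifty-three / fifty-seven / sixty-two percent relative error-rate reductions mean against a purely hypothetical baseline word error rate (the absolute baseline numbers are not given here):

    # the reductions are relative to the aurora baseline; 20% wer is made up
    baseline_wer = 20.0
    for name, reduction in [("old vad", 53.66), ("real vad", 57.02), ("ideal vad", 62.0)]:
        print("%-9s -> %.2f%% wer" % (name, baseline_wer * (1.0 - reduction / 100.0)))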
professor b: h he likes to use them both , cuz then it has one part that 's discriminative , one part that 's not . phd c: so , , . right now it seems that i tested on speechdat - car while the experiment are running on your on ti - digits . , it improves on the - matched and the mismatched conditions , but it get worse on the highly mismatched . phd c: like , on the - match and medium mismatch , the gain is around five percent relative , but it goes down a lot more , like fifteen percent on the hm case . professor b: so it 's you have seventy - three features , and you 're just feeding them like that . there is n't any klt or anything ? phd a: that 's how you get down to twenty - eight ? why twenty - eight ? phd c: i . it 's i it 's because it 's what we did for the first proposal . we tested , trying to go down phd c: but we have to for , we have to go down , because the limit is now sixty features . we have to find a way to decrease the number of features . phd a: so , it seems funny that i , maybe i do n't u quite understand everything , but that adding features i if you 're keeping the back - end fixed . maybe that 's it . because it seems like just adding information should n't give worse results . but i if you 're keeping the number of gaussians fixed in the recognizer , then professor b: , . but , just in general , adding information suppose the information you added , was a really terrible feature and all it brought in was noise . right ? so so , or or suppose it was n't completely terrible , but it was completely equivalent to another one feature that you had , except it was noisier . in that case you would n't necessarily expect it to be better . phd a: , i was n't necessarily saying it should be better . i ' m just surprised that you 're getting fifteen percent relative worse on the wel professor b: so , " highly mismatched condition " means that your training is a bad estimate of your test . so having , a g a l a greater number of features , if they are n't maybe the right features that you use , certainly can e can easily , make things worse . , you 're right . if you have if you have , lots and lots of data , and you have and your training is representative of your test , then getting more sources of information should just help . but but it 's it does n't necessarily work that way . so i wonder , what 's your thought about what to do next with it ? phd c: i ' m surprised , because i expected the neural net to help more when there is more mismatch , as it was the case for the phd c: , it 's the same training set , so it 's timit with the ti - digits ' , noises , added . professor b: , we might have to experiment with , better training sets . again . i the other thing is , before you found that was the best configuration , but you might have to retest those things now that we have different the rest of it is different , what 's the effect of just putting the neural net on without the o other path ? , what the straight features do . that gives you this . what it does in combination . you do n't necessarily phd a: what if you did the would it make sense to do the klt on the full set of combined features ? instead of just on the phd c: i g i . the reason i did it this ways is that in february , it we tested different things like that , so , having two klt , having just a klt for a network , or having a global klt . phd a: , i see . 
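a minimal sketch of the stream assembly being described: forty-five straight features side by side with a klt-reduced copy of the mlp outputs; the mlp output dimensionality and the random data are placeholders, not values from the meeting:

    import numpy as np

    def klt(X, n_keep):
        # karhunen-loeve transform: project onto the top eigenvectors
        # of the covariance of X (frames in rows)
        Xc = X - X.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(eigvals)[::-1]      # largest variance first
        return Xc @ eigvecs[:, order[:n_keep]]

    rng = np.random.default_rng(0)
    straight = rng.standard_normal((1000, 45))   # 15 mfcc + deltas + double-deltas
    mlp_out = rng.standard_normal((1000, 56))    # hypothetical net output size

    tandem = np.hstack([straight, klt(mlp_out, 28)])
    print(tandem.shape)   # (1000, 73) -- over the sixty-feature limit

a single klt over the full seventy-three-dimensional concatenation, instead of one per stream, is the variant the discussion comes back to just below.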
so you tried the global klt before and it did n't really phd c: and , th the differences between these configurations were not huge , but it was marginally better with this configuration . professor b: but , that 's another thing to try , since things are different . and i if the these are all so all of these seventy - three features are going into , the hmm . and is are i are any deltas being computed of tha of them ? phd c: , maybe we can add some context from these features also as dan did in his last work . professor b: could . i but the other thing i was thinking was , now i lost track of what i was thinking . but . phd a: what is the you said there was a limit of sixty features ? what 's the relation between that limit and the , forty - eight , forty eight hundred bits per second ? phd c: and generally , i it s allows you to transmit like , fifteen , cepstrum . professor b: the issue was that , this is supposed to be a standard that 's then gon na be fed to somebody 's recognizer somewhere which might be , it might be a concern how many parameters are use u used and . they felt they wanted to set a limit . so they chose sixty . some people wanted to use hundreds of parameters and that bothered some other people . u and so they just chose that . i it 's r arbitrary too . but but that 's what was chosen . i remembered what i was going to say . what i was going to say is that , maybe with the noise removal , these things are now more correlated . so you have two sets of things that are uncorrelated , within themselves , but they 're pretty correlated with one another . they 're being fed into these , variants , only gaussians and , so maybe it would be a better idea now than it was before to , have , one klt over everything , to de - correlate it . professor b: so we found this , this macrophone data , and , that we were using for these other experiments , to be pretty good . so that 's i after you explore these other alternatives , that might be another way to start looking , is just improving the training set . , we were getting , lots better recognition using that , than , you do have the problem that , u i we are not able to increase the number of gaussians , or anything to , to match anything . so we 're only improving the training of our feature set , but that 's still probably something . phd a: so you 're saying , add the macrophone data to the training of the neural net ? the tandem net ? professor b: that 's the only place that we can train . we ca n't train the other with anything other than the standard amount , so . professor b: , you did experiments back then where you made it bigger and it and that was the threshold point . much less than that , it was worse , and much more than that , it was n't much better . phd d: so is it though the performance , big relation in the high ma high mismatch has something to do with the , cleaning up that you that is done on the timit after adding noise ? so it 's i all the noises are from the ti - digits , right ? so you i phd d: , it 's like the high mismatch of the speechdat - car after cleaning up , maybe having more noise than the training set of timit after clean s after you do the noise clean - up . , earlier you never had any compensation , you just trained it straight away . so it had like all these different conditions of s n rs , actually in their training set of neural net . but after cleaning up you have now a different set of s n rs , right ? for the training of the neural net . 
and is it something to do with the mismatch that 's created after the cleaning up , like the high mismatch phd c: you mean the most noisy occurrences on speechdat - car might be a lot more noisy than professor b: so right . so the training the neural net is being trained with noise compensated . professor b: which makes sense , but , you 're saying , the noisier ones are still going to be , even after our noise compensation , are still gon na be pretty noisy . phd d: , so now the after - noise compensation the neural net is seeing a different set of s n rs than that was originally there in the training set . of timit . because in the timit it was zero to some clean . phd d: so the net saw all the snr @ conditions . now after cleaning up it 's a different set of snr . and that snr may not be , like , com covering the whole set of s n rs that you 're getting in the speechdat - car . professor b: but the speechdat - car data that you 're seeing is also reduced in noise by the noise compensation . phd d: , it is . but , i ' m saying , there could be some issues of phd c: , if the initial range of snr is different , we the problem was already there before . professor b: , it depends on whether you believe that the noise compensation is equally reducing the noise on the test set and the training set . professor b: , you 're saying there 's a mismatch in noise that was n't there before , but if they were both the same before , then if they were both reduic reduced equally , then , there would not be a mismatch . , this may be heaven forbid , this noise compensation process may be imperfect , so maybe it 's treating some things differently . phd d: , i . that could be seen from the ti - digits , testing condition because , the noises are from the ti - digits , right ? noise so cleaning up the ti - digits and if the performance goes down in the ti - digits mismatch high mismatch like this professor b: , one of the things about , the macrophone data , , it was recorded over many different telephones . so , there 's lots of different kinds of acoustic conditions . , it 's not artificially added noise or anything . so it 's not the same . i do n't think there 's anybody recording over a car from a car , but it 's varied enough that if doing this adjustments , and playing around with it does n't , make it better , the most , it seems like the most obvious thing to do is to improve the training set . , what we were the condition it it gave us an enormous amount of improvement in what we were doing with meeting recorder digits , even though there , again , these m macrophone digits were very , very different from , what we were going on here . , we were n't talking over a telephone here . but it was just having a variation in acoustic conditions was just a good thing . phd c: , actually to s , what i observed in the hm case is that the number of deletion dramatically increases . it it doubles . phd c: when i added the num the neural network it doubles the number of deletions . , so i do n't how to interpret that , but , mmm phd a: and and did an other numbers stay the same ? insertion substitutions stay the same ? phd c: they maybe they are a little bit , lower . they are a little bit better . but professor b: did they increase the number of deletions even for the cases that got better ? say , for the , it professor b: so it 's only the highly mismatched ? 
and it remind me again , the " highly mismatched " means that the phd c: it 's clean training , close microphone training and distant microphone , high speed , . phd c: but , but without the neural network it 's , it 's better . it 's just when we add the neural networks . phd a: that says that , the models in , the recognizer are really paying attention to the neural net features . professor b: actually { nonvocalsound } the timit noises are a range of noises and they 're not so much the stationary driving noises , right ? it 's it 's pretty different . is n't it ? phd c: , there is a car noise . so there are f just four noises . , " car " , babble , phd c: train station , . so it 's mostly , " car " is stationary , babble , it 's a stationary background plus some voices , some speech over it . and the other two are rather stationary also . professor b: , that if you run it actually , you maybe you remember this . when you in the old experiments when you ran with the neural net only , and did n't have this side path , , with the pure features as , did it make things better to have the neural net ? was it about the same ? , w i professor b: until you put the second path in with the pure features , the neural net was n't helping . , that 's interesting . phd c: it was helping , if the features are b were bad , just plain p l ps or m f c cs . as soon as we added lda on - line normalization , and all these things , then professor b: they were doing similar enough things . , i still think it would be k interesting to see what would happen if you just had the neural net without the side thing . and and the thing i have in mind is , maybe you 'll see that the results are not just a little bit worse . maybe that they 're a lot worse . ? but if on the ha other hand , it 's , say , somewhere in between what you 're seeing now and , what you 'd have with just the pure features , then maybe there is some problem of a , combination of these things , or correlation between them somehow . if it really is that the net is hurting you at the moment , then the issue is to focus on , improving the net . so what 's the overall effe , you have n't done all the experiments but you said it was i somewhat better , say , five percent better , for the first two conditions , and fifteen percent worse for the other one ? but it 's but that one 's weighted lower , phd c: i d it 's it was one or two percent . that 's not that bad , but it was l like two percent relative worse on speechdat - car . i have to check that . , i have i will . phd d: , it will overall it will be still better even if it is fifteen percent worse , because the fifteen percent worse is given like f w twenty - five point two five eight . professor b: so the so the worst it could be , if the others were exactly the same , is four , phd d: , so it 's four . is i so either it 'll get cancelled out , or you 'll get , like , almost the same . phd a: , i ' ve been wondering about something . in the , a lot of the , the hub - five systems , recently have been using lda . and they , they run lda on the features right before they train the models . so there 's the lda is right there before the h m so , you guys are using lda but it seems like it 's pretty far back in the process . phd d: , this lda is different from the lda that you are talking about . the lda that you saying is , like , you take a block of features , like nine frames , and then do an lda on it , and then reduce the dimensionality to something like twenty - four like that . 
phd d: so this is a two dimensional tile . and the lda that we are f applying is only in time , not in frequency high cost frequency . so it 's like more like a filtering in time , rather than doing a r phd a: . ok . so what i what about , i u what i w , i if this is a good idea or not , but what if you put ran the other lda , on your features right before they go into the hmm ? phd c: what do we do with the ann is something like that except that it 's not linear . but it 's like a nonlinear discriminant analysis . phd a: i g but , w but the other features that you have , th the non - tandem ones , phd c: , i know . that that . , in the proposal , they were transformed u using pca , but , it might be that lda could be better . professor b: the a the argument i is i in and it 's not like we really know , but the argument anyway is that , , we always have the prob , discriminative things are good . lda , neural nets , they 're good . , they 're good because you learn to distinguish between these categories that you want to be good at distinguishing between . and pca does n't do that . it pac - pca low - order pca throws away pieces that are , maybe not gon na be helpful just because they 're small , . but , the problem is , training sets are n't perfect and testing sets are different . so you f you face the potential problem with discriminative , be it lda or neural nets , that you are training to discriminate between categories in one space but what you 're really gon na be g getting is something else . and so , stephane 's idea was , let 's feed , both this discriminatively trained thing and something that 's not . so you have a good set of features that everybody 's worked really hard to make , and then , you discriminately train it , but you also take the path that does n't have that , and putting those in together . and that seem so it 's like a combination of the , what , dan has been calling , a feature combination versus posterior combination . it 's it 's , you have the posterior combination but then you get the features from that and use them as a feature combination with these other things . and that seemed , at least in the last one , as he was just saying , he when he only did discriminative , i it actually was it did n't help in this particular case . there was enough of a difference , i , between the testing and training . but by having them both there the fact is some of the time , the discriminative is gon na help you . and some of the time it 's going to hurt you , and by combining two information sources if , if phd a: so you would n't necessarily then want to do lda on the non - tandem features because now you 're doing something to them that professor b: now , again , it 's we 're just trying these different things . we do n't really 's gon na work best . but if that 's the hypothesis , at least it would be counter to that hypothesis to do that . and in principle you would think that the neural net would do better at the discriminant part than lda . phd a: exactly . , we were getting ready to do the tandem , for the hub - five system , and , andreas and i talked about it , and the idea w the thought was , " , , that i th the neural net should be better , but we should at least have , a number , to show that we did try the lda in place of the neural net , so that we can , show a clear path . , that you have it without it , then you have the lda , then you have the neural net , and you can see , theoretically . 
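the contrast being drawn is between an lda used as a filter along time, one channel at a time, and the hub-five-style lda applied to a stacked tile of frames; in this sketch both the filter taps and the projection matrix are untrained stand-ins, and the edge frames simply wrap:

    import numpy as np

    rng = np.random.default_rng(0)
    feats = rng.standard_normal((200, 23))   # frames x channels

    # (a) lda in time: one fir filter per channel, dimensionality unchanged
    fir = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # stand-in for the lda-derived filter
    temporal = np.apply_along_axis(
        lambda x: np.convolve(x, fir, mode="same"), axis=0, arr=feats)
    print(temporal.shape)    # (200, 23)

    # (b) block lda: stack a nine-frame context (9 * 23 = 207 dims) and
    # project down; a trained lda matrix would replace this random one
    stacked = np.hstack([np.roll(feats, s, axis=0) for s in range(-4, 5)])
    proj = rng.standard_normal((stacked.shape[1], 24))
    print((stacked @ proj).shape)    # (200, 24)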
i was just wondering i phd a: that 's what that 's what we 're gon na do next as soon as i finish this other thing . professor b: no , no , but it might not even be true . , it 's it 's a great idea . , one of the things that always disturbed me , in the resurgence of neural nets that happened in the eighties was that , a lot of people because neural nets were pretty easy to use a lot of people were just using them for all sorts of things without , looking into the linear , versions of them . and , people were doing recurrent nets but not looking at iir filters , and , , so , it 's definitely a good idea to try it . phd a: and everybody 's putting that on their systems now , and so , i that 's what made me wonder about this , professor b: , they ' ve been putting them in their systems off and on for ten years , phd a: , what is it 's like in the hub - five evaluations , and you read the system descriptions and everybody 's got , lda on their features . phd c: it 's the transformation they 're estimating on , they are trained on the same data as the final hmm are . phd a: , so it 's different . exactly . cuz they do n't have these , mismatches that you guys have . so that 's why i was wondering if maybe it 's not even a good idea . i enough about it , professor b: , part of why you were getting into the klt y you were describing to me at one point that you wanted to see if , , getting good orthogonal features was and combining the different temporal ranges was the key thing that was happening or whether it was this discriminant thing , right ? so you were just trying you r , this is it does n't have the lda aspect but th as far as the orthogonalizing transformation , you were trying that at one point , right ? you were . does something . it does n't work as . phd d: so , i ' ve been exploring a parallel vad without neural network with , like , less latency using snr and energy , after the cleaning up . so what i 'd been trying was , after the b after the noise compensation , n i was trying t to f find a f feature based on the ratio of the energies , that is , cl after clean and before clean . so that if they are , like , pretty c close to one , which means it 's speech . and if it is n if it is close to zero , which is so it 's like a scale @ probability value . so i was trying , with full band and multiple bands , m ps separating them to different frequency bands and deriving separate decisions on each bands , and trying to combine them . the advantage being like it does n't have the latency of the neural net if it can g and it gave me like , one point one more than one percent relative improvement . so , from fifty - three point six it went to fifty f four point eight . so it 's , like , only slightly more than a percent improvement , just like which means that it 's doing a slightly better job than the previous vad , at a l lower delay . so , phd d: so d with the delay , that 's gone is the input , which is the sixty millisecond . the forty plus twenty . at the input of the neural net you have this , f nine frames of context plus the delta . phd d: so that delay , plus the lda . , so the delay is only the forty millisecond of the noise cleaning , plus the hundred millisecond smoothing at the output . 
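a rough sketch of the energy-ratio feature just described: per-frame energy after noise compensation divided by energy before it; the frame sizes, the toy signals and the threshold are all made up for illustration, and a per-band version would compute the same ratio inside mel sub-bands:

    import numpy as np

    def frame_energy(x, frame_len=200, hop=80):
        # 25 ms frames with a 10 ms hop at 8 khz
        n = 1 + (len(x) - frame_len) // hop
        idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
        return (x[idx] ** 2).sum(axis=1)

    def energy_ratio_vad(noisy, cleaned, threshold=0.5):
        # near 1: the frame survived the cleaning, so probably speech;
        # near 0: the frame was mostly noise that got removed
        ratio = frame_energy(cleaned) / (frame_energy(noisy) + 1e-10)
        return ratio, ratio > threshold

    rng = np.random.default_rng(0)
    noisy = rng.standard_normal(8000)
    cleaned = 0.3 * noisy      # pretend the cleaner removed most of the energy
    ratio, is_speech = energy_ratio_vad(noisy, cleaned)
    print(ratio[:3], is_speech[:3])

that fixed threshold is exactly the quantity reported next as being hard to set consistently across the different databases.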
so the di the biggest the problem f for me was to find a consistent threshold that works across the different databases , because i t i try to make it work on tr speechdat - car and it fails on ti - digits , or if i try to make it work on that it 's just the italian , it does n't work on the finnish . so , so there are there was , like , some problem in balancing the deletions and insertions when i try different thresholds . the i ' m still trying to make it better by using some other features from the after the p clean up maybe , some , correlation auto - correlation or some s additional features of to mainly the improvement of the vad . i ' ve been trying . professor b: now this , " before and after clean " , it sounds like you think that 's a good feature . that that , it you th think that the , the i it appears to be a good feature , right ? what about using it in the neural net ? phd d: so that 's the so we ' ve been thinking about putting it into the neural net also . because they did that itself professor b: so if we can live with the latency or cut the latencies elsewhere , then that would be a , good thing . , anybody has anybody you guys or naren , somebody , tried the , , second th second stream thing ? phd d: , i just h put the second stream in place and , ran one experiment , but just like just to know that everything is fine . so it was like , forty - five cepstrum plus twenty - three mel log mel . and and , just , like , it gave me the baseline performance of the aurora , which is like zero improvement . so tried it on italian just to know that everything is but i did n't export anything out of it because it was , like , a weird feature set . so . professor b: , what , would be more what you 'd want to do is , put it into another neural net . professor b: , we 're not quite there yet . so we have to figure out the neural nets , i . phd d: the , other thing i was wondering was , if the neural net , has any because of the different noise con unseen noise conditions for the neural net , where , like , you train it on those four noise conditions , while you are feeding it with , like , a additional some four plus some f few more conditions which it has n't seen , actually , from the f while testing . phd d: instead of just h having c , those cleaned up t cepstrum , sh should we feed some additional information , like the the we have the vad flag . , should we f feed the vad flag , also , at the input so that it has some additional discriminating information at the input ? phd d: we have the vad information also available at the back - end . so if it is something the neural net is not able to discriminate the classes because most of it is sil , we have dropped some silence f we have dropped so silence frames ? no , we have n't dropped silence frames still . phd d: the b biggest classification would be the speech and silence . so , by having an additional , feature which says " this is speech and this is nonspeech " , it certainly helps in some unseen noise conditions for the neural net . phd d: , we have we are transferring the vad to the back - end feature to the back - end . because we are dropping it at the back - end after everything all the features are computed . phd d: so the neural so that is coming from a separate neural net or some vad . which is which is certainly giving a professor b: you could feed it into the neural net . 
the other thing you could do is just , p modify the , output probabilities of the , , neural net , tandem neural net , based on the fact that you have a silence probability . so you have an independent estimator of what the silence probability is , and you could multiply the two things , and renormalize . , you 'd have to do the nonlinearity part and deal with that . , go backwards from what the nonlinearity would , would be . phd a: but in principle would n't it be better to feed it in ? and let the net do that ? professor b: , u not . let 's put it this way . , y you have this complicated system with thousands and thousand parameters and you can tell it , " learn this thing . " or you can say , " it 's silence ! go away ! " , i does n't ? i think the second one sounds a lot more direct . phd a: what if you so , what if you then , since this , what if you only use the neural net on the speech portions ? professor b: but it 's @ , no , you want to train on the nonspeech also , because that 's part of what you 're learning in it , to generate , that it 's it has to distinguish between . phd a: but , if you 're gon na if you 're going to multiply the output of the net by this other decision , would then you do n't care about whether the net makes that distinction , right ? professor b: but this other thing is n't perfect . so that you bring in some information from the net itself . professor b: now the only thing that bothers me about all this is that i the the fact i it 's bothersome that you 're getting more deletions . phd c: so i might maybe look at , is it due to the fact that , the probability of the silence at the output of the network , is , phd d: it may not be it may be too it 's too high in a sense , like , everything is more like a , flat probability . phd d: so , like , it 's not really doing any distinction between speech and nonspeech or , different among classes . phd a: be interesting to look at the for the i wonder if you could do this . but if you look at the , highly mism high mismat the output of the net on the high mismatch case and just look at , the distribution versus the other ones , do you see more peaks ? phd c: it it seems that the vad network does n't , it does n't drop , too many frames because the dele the number of deletion is reasonable . but it 's just when we add the tandem , the final mlp , and then professor b: now the only problem is you do n't want to ta i for the output of the vad before you can put something into the other system , professor b: cuz that 'll shoot up the latency a lot , am i missing something here ? so that 's maybe a problem with what i was just saying . but i phd a: but if you were gon na put it in as a feature it means you already have it by the time you get to the tandem net , right ? phd d: . we w we do n't have it , actually , because it 's it has a high rate energy the vad has a professor b: it 's done in , some of the things are , not in parallel , but certainly , it would be in parallel with the with a tandem net . in time . so maybe , if that does n't work , but it would be interesting to see if that was the problem , anyway . and and then i another alternative would be to take the feature that you 're feeding into the vad , and feeding it into the other one as . and then maybe it would just learn it better . 
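one way to read the "multiply the two things and renormalize" suggestion above: scale the tandem net's silence posterior by the independent vad estimate, scale the speech classes by its complement, and renormalize each frame; in the real chain this would have to happen before the log and klt that follow the net, which is the "nonlinearity part" caveat:

    import numpy as np

    def rescale_silence(posteriors, p_sil, sil_idx=0):
        # posteriors: frames x classes from the tandem net (rows sum to 1)
        # p_sil: independent per-frame silence probability from the vad
        scaled = posteriors * (1.0 - p_sil)[:, None]
        scaled[:, sil_idx] = posteriors[:, sil_idx] * p_sil
        return scaled / scaled.sum(axis=1, keepdims=True)

    # toy example: 3 frames, 4 classes, class 0 = silence
    post = np.array([[0.70, 0.10, 0.10, 0.10],
                     [0.25, 0.25, 0.25, 0.25],
                     [0.05, 0.50, 0.30, 0.15]])
    p_sil = np.array([0.9, 0.5, 0.1])
    print(rescale_silence(post, p_sil))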
but that 's , that 's an interesting thing to try to see , if what 's going on is that in the highly mismatched condition , it 's , causing deletions by having this silence probability up too high , at some point where the vad is saying it 's actually speech . which is probably true . professor b: cuz , the v a if the vad said since the vad is right a lot , anyway . might be . , we just started working with it . but these are some good ideas . phd c: , and the other thing , there are other issues maybe for the tandem , like , , do we want to , w n do we want to work on the targets ? or , like , instead of using phonemes , using more context dependent units ? phd c: , i ' m thinking , also , a w about dan 's work where he trained a network , not on phoneme targets but on the hmm state targets . it was giving s slightly better results . professor b: problem is , if you are going to run this on different m test sets , including large vocabulary , phd c: i was just thinking maybe about , like , generalized diphones , and come up with a reasonable , not too large , set of context dependent units , and then anyway we would have to reduce this with the klt . professor b: , maybe . but i d it i it 's all worth looking at , but it sounds to me like , looking at the relationship between this and the speech noise is probably a key thing . that and the correlation between . phd a: so if , if the , high mismatch case had been more like the , the other two cases in terms of giving you just a better performance , how would this number have changed ? professor b: , we what 's it 's gon na be the ti - digits yet . he has n't got the results back yet . phd c: if you extrapolate the speechdat - car - matched and medium - mismatch , it 's around , maybe five . phd c: , it 's around five percent , because it 's if everything is five percent . phd c: i d have the speechdat - car right now , so it 's running it shou we should have the results today during the afternoon , professor b: , i ' m leaving next wednesday . may or may not be in the morning . i leave in the afternoon . professor b: i ' m talking about next week . i ' m leaving next wednesday . this afternoon , right , for the meeting meeting ? , that 's just cuz of something on campus . so next week i wo n't , and the week after i wo n't , cuz i 'll be in finland . and the week after that i wo n't . by that time you 'll be , you 'll both be gone from here . so there 'll be no definitely no meeting on september sixth . professor b: so , sunil will be in oregon . , stephane and i will be in denmark . so it 'll be a few weeks , really , before we have a meeting of the same cast of characters . i , just , you guys should probably meet . and maybe barry will be around . and and then , we 'll start up again with dave and barry and stephane and us on the , twentieth . thirteenth ? about a month ? professor b: i ' m gone for two and a half weeks starting next we d - late next wednesday . phd a: so that 's you wo n't be at the next three of these meetings . is that right ? professor b: , i wo n't it 's probably four because of is it three ? let 's see , twenty - third , thirtieth , sixth . that 's right , next three . and the third one wo n't probably wo n't be a meeting , cuz , su - sunil , stephane , and i will all not be here . professor b: mmm . so it 's just , the next two where there will be there , may as be meetings , but wo n't be at them . 
and then starting up on the thirteenth , { nonvocalsound } , we 'll have meetings again but we 'll have to do without sunil here somehow . phd d: p s it 's like , it 's tentatively all full . , that 's a proposed date , i .
icsi's meeting recorder group at berkeley met to discuss , for the most part , progress on the aurora project. the main areas being worked on were the voice activity detector and the tandem data streams. the group discussed possible further investigations arising from these areas , including better linking the two. they also considered how aspects of an absent member's work might be applied to the current project. the meeting closed with a discussion of upcoming absences and how meetings would continue. speaker me018 must confirm with someone further up the project chain what adjustments are needed to work with the new software. the system at its current stage employs the neural network and second stream , but the group leader would like the network investigated separately , in case it is hurting performance. there are worries regarding the need to make adjustments so the new software can handle the group's different feature set. the system , whilst improved , also has increased latency , and although the limit has not been set , the group needs to reduce it. the same goes for the number of features they use in their system , since the limit there has been set at an arbitrary value of sixty. there has been an increase in the number of deletion errors , which is of some concern. speaker mn007 has been implementing a new voice activity detector on noise-compensated data , and it performs much better. he has also been working on the tandem neural network. speaker me013 , along with a student , submitted work on reverberation to a speech workshop. speaker me018 has downloaded and compiled the software he was asked to work with in the previous meeting.
###dialogue: professor b: test . let 's see . move it bit . test ? ok , i it 's alright . so , let 's see . , barry 's not here and dave 's not here . , say about just q just quickly to get through it , that dave and i submitted this asru . professor b: so . , it 's interesting . , we 're dealing with rever reverberation , and , when we deal with pure reverberation , the technique he 's using works really , really . , and when they had the reverberation here , we 'll measure the signal - to - noise ratio and it 's , about nine db . , professor b: and actually it brought up a question which may be relevant to the aurora too . , i know that when you figured out the filters that we 're using for the mel scale , there was some experimentation that went on at , at ogi . but one of the differences that we found between the two systems that we were using , the aurora htk system baseline system and the system that we were the , other system we were using , the sri system , was that the sri system had maybe a , hundred hertz high - pass . and the , aurora htk , it was like twenty . professor b: the edge is really , sixty - four ? for some reason , dave thought it was twenty , professor b: but do , h how far down it would be at twenty hertz ? what the how much rejection would there be at twenty hertz , let 's say ? phd d: twenty hertz frequency , it 's zero at twenty hertz , right ? the filter ? phd c: yea - actually , the left edge of the first filter is at sixty - four . professor b: it 's actually set to zero ? what filter is that ? is this , from the from phd c: it this is the filter bank in the frequency domain that starts at sixty - four . professor b: right . ok . so that 's a little different than dave thought , . but but , still , it 's possible that we 're getting in some more noise . so i wonder , is it @ was there their experimentation with , say , throwing away that filter ? and , phd d: , throwing away the first ? , we ' ve tried including the full bank . right ? from zero to four k . professor b: right , but the question is , whether sixty - four hertz is , too , low . phd d: , make it a hundred or so ? i t i ' ve tried a hundred and it was more or less the same , or slightly worse . professor b: mmm . that 'd be something to look at sometime because what , , he was looking at was performance in this room . professor b: would that be more like , you 'd think that 'd be more like speechdat - car , i , in terms of the noise . the speechdat - car is more , roughly stationary , a lot of it . and and ti - digits maybe is not so much as - . , maybe it 's not a big deal . but , anyway , that was just something we wondered about . but , , certainly a lot of the noise , is , below a hundred hertz . , the signal - to - noise ratio , looks a fair amount better if you high - pass filter it from this room . but it 's still pretty noisy . even even for a hundred hertz up , it 's still fairly noisy . the signal - to - noise ratio is actually still pretty bad . so , , the main the phd a: so that 's on th that 's on the f the far field ones though , right ? professor b: , we got a video projector in here , and , which we keep on during every session we record , which , i w we were aware of professor b: but we thought it was n't a bad thing . , that 's a noise source . , and there 's also the , air conditioning . which , , is a pretty low frequency thing . 
so , those are major components , professor b: but , it , i maybe i said this last week too but it really became apparent to us that we need to take account of noise . and , so when he gets done with his prelim study one of the next things we 'd want to do is to take this , noise , processing and , synthesize some speech from it . professor b: and then , in about , a little less than two weeks . , it might even be sooner . , let 's see , this is the sixteenth , seventeenth ? , i if he 's before it might even be in a week . professor b: so they do it right at the beginning of the semester . , that was one the overall results seemed to be first place in the case of either , artificial reverberation or a modest sized training set . , either way , i , it helped a lot . and but if you had a really big training set , a recognizer , system that was capable of taking advantage of a really large training set that one thing with the htk is that is has the as we 're using the configuration we 're using is w s is being bound by the terms of aurora , we have all those parameters just set as they are . so even if we had a hundred times as much data , we would n't go out to , ten or t or a hundred times as many gaussians or anything . , it 's hard to take advantage of big chunks of data . , whereas the other one does expand as you have more training data . professor b: it does it automatically , actually . and so , that one really benefited from the larger set . and it was also a diverse set with different noises and . , so , that seemed to be so , if you have that better recognizer that can build up more parameters , and if you , have the natural room , which in this case has a p a pretty bad signal - to - noise ratio , then in that case , the right thing to do is just do u use speaker adaptation . and and not bother with this acoustic , processing . but that would not be true if we did some explicit noise - processing as , the convolutional things we were doing . that 's what we found . phd a: i , started working on the mississippi state recognizer . so , i got in touch with joe and , from your email and things like that . phd a: and he gave me all of the pointers and everything that i needed . and so i downloaded the , there were two things , that they had to download . one was the , i the software . and another wad was a , like a sample run . so i downloaded the software and compiled all of that . and it compiled fine . phd a: and so i have n't grabbed the latest one that he just , put out yet . phd a: so . , but , the software seemed to compile fine and everything , so . and , professor b: is there any word yet about the issues about , adjustments for different feature sets or anything ? phd a: no , i d you asked me to write to him and i forgot to ask him about that . or if i did ask him , he did n't reply . i do n't remember yet . , i 'll d i 'll double check that and ask him again . professor b: it 's like that could r turn out to be an important issue for us . phd d: cuz they have , already frozen those in i insertion penalties and all those is what i feel . because they have this document explaining the recognizer . and they have these tables with , various language model weights , insertion penalties . phd d: and , on that , they have run some experiments using various insertion penalties and all those phd d: , p the one that they have reported is a nist evaluation , wall street journal . phd d: no . 
so they 're , like so they are actually trying to , fix that those values using the clean , training part of the wall street journal . which is , the aurora . aurora has a clean subset . , they want to train it and then this they 're going to run some evaluations . professor b: so they 're set they 're setting it based on that ? so now , we may come back to the situation where we may be looking for a modification of the features to account for the fact that we ca n't modify these parameters . but it 's still worth , just since , just chatting with joe about the issue . phd a: , do you think that 's something i should just send to him or do you think i should send it to this there 's an a m a mailing list . professor b: , it 's not a secret . , we 're , certainly willing to talk about it with everybody , but i think that , it 's probably best to start talking with him just to @ , it 's a dialogue between two of you about what , what does he think about this and what could be done about it . if you get ten people in involved in it there 'll be a lot of perspectives based on , how professor b: but , it all should come up eventually , but if there is any , way to move in a way that would , be more open to different kinds of features . but if , if there is n't , and it 's just shut down and then also there 's probably not worthwhile bringing it into a larger forum where political issues will come in . phd d: so this is now it 's compiled under solaris ? because he there was some mail r saying that it 's may not be stable for linux and all those . professor b: i noticed , just glancing at the , hopkins workshop , web site that , one of the thing i , we 'll see how much they accomplish , but one of the things that they were trying to do in the graphical models thing was to put together a , tool kit for doing , r , arbitrary graphical models for , speech recognition . so and jeff , the two jeffs were professor b: , he was here for a couple years and he , got his phd . he and he 's , been at ibm for the last couple years . professor b: , so he did his phd on dynamic bayes - nets , for speech recognition . he had some continuity built into the model , presumably to handle some , inertia in the production system , and , phd c: , i ' ve been playing with , first , the , vad . , so it 's exactly the same approach , but the features that the vad neural network use are , mfcc after noise compensation . , i have the results . phd c: this is what we get after this so , actually , we , here the features are noise compensated and there is also the lda filter . , and then it 's a pretty small neural network which use , nine frames of six features from c - zero to c - fives , plus the first derivatives . and it has one hundred hidden units . professor b: two outputs . so i about eleven thousand parameters , which actually should n't be a problem , even in small phones . phd c: it should be ok . so the previous syst it 's based on the system that has a fifty - three point sixty - six percent improvement . it 's the same system . the only thing that changed is the n a p a es the estimation of the silence probabilities . which now is based on , cleaned features . professor b: and , it 's a l it 's a lot better . that 's great . phd c: so it 's not bad , but the problem is still that the latency is too large . phd c: because the latency of the vad is two hundred and twenty milliseconds . and , the vad is used , i for on - line normalization , and it 's used before the delta computation . 
so if you add these components it goes t to a hundred and seventy , right ? professor b: i ' m confused . you started off with two - twenty and you ended up with one - seventy ? professor b: so it 's two - twenty . i the is this are these twenty - millisecond frames ? is that why ? is it after downsampling ? or phd c: the two - twenty is one hundred milliseconds for the no , it 's forty milliseconds for t for the , cleaning of the speech . then there is , the neural network which use nine frames . so it adds forty milliseconds . phd c: after that , you have the , filtering of the silence probabilities . which is a million filter it , and it creates a one hundred milliseconds delay . so , phd c: , there are twenty that comes from there is ten that comes from the lda filters also . right ? , so it 's two hundred and ten , phd d: if you are phrasing f using three frames , it is thirty here for delta . phd d: so five frames , that 's twenty . so it 's who un two hundred and ten . professor b: forty for the i n ann , a hundred for the smoothing . , but at ten , phd d: at th { nonvocalsound } at the input . , that 's at the input to the net . phd d: and there i so it 's like s five , six cepstrum plus delta at nine frames of professor b: ten milliseconds for lda filter , and t and ten another ten milliseconds you said for the frame ? phd c: for the frame i . i computed two - twenty , it 's i it 's for the fr the phd c: so this is the features that are used by our network and then afterwards , you have to compute the delta on the , main feature stream , which is , delta and double - deltas , which is fifty milliseconds . professor b: no , the after the noise part , the forty the other hundred and eighty , a minute . some of this is , is in parallel , is n't it ? , the lda , you have the lda as part of the v d - , vad ? or professor b: , it does ? so in that case there is n't too much in parallel . phd c: but , we could probably put the delta , before on - line normalization . it should not that make a big difference , phd a: could that help a little bit ? , i there 's a lot of things you could do to professor b: so if you put the delta before the , ana on - line if then it could go in parallel . phd c: cuz the time constant of the on - line normalization is pretty long compared to the delta window , so . it should not make professor b: and you ought to be able to shove tw , sh pull off twenty milliseconds from somewhere else to get it under two hundred , right ? professor b: the hundred milla mill a hundred milliseconds for smoothing is an arbitrary amount . it could be eighty and probably do @ phd a: i a hun wh - what 's the baseline you need to be under ? two hundred ? professor b: , we . they 're still arguing about it . , if it 's two if it 's , if it 's two - fifty , then we could keep the delta where it is if we shaved off twenty . if it 's two hundred , if we shaved off twenty , we could , meet it by moving the delta back . phd a: so , how do that what you have is too much if they 're still deciding ? professor b: , we do n't , but it 's just , the main thing is that since that we got burned last time , and , by not worrying about it very much , we 're just staying conscious of it . professor b: and so , th , if a week before we have to be done someone says , " , you have to have fifty milliseconds less than you have now " , it would be pretty frantic around here . phd a: but still , that 's a pretty big , win . 
and it does n't seem like you 're in terms of your delay , you 're , that professor b: he added a bit on , i , because before we were had were able to have the noise , , and the lda be in parallel . and now he 's requiring it to be done first . phd c: , but the main thing , maybe , is the cleaning of the speech , which takes forty milliseconds or so . phd d: , the lda we , is , like is it very crucial for the features , right ? phd c: this is the first try . , i maybe the lda 's not very useful then . professor b: , you have twenty for delta computation which y now you 're doing twice , but yo w were you doing that before ? phd c: , in the proposal , the input of the vad network were just three frames , . professor b: so , what you have now is fort , forty for the noise , twenty for the delta , and ten for the lda . that 's seventy milliseconds of which was formerly in parallel , that 's the difference as far as the timing , and you could experiment with cutting various pieces of these back a bit , we 're s we 're not in terrible shape . phd a: where where is this fifty - seven point o two in comparison to the last evaluation ? phd d: so this is like the first proposal . the proposal - one . it was forty - four , actually . professor b: and we still do n't have the neural net in . so so it 's so it 's we 're we 're doing better . professor b: , we 're getting better recognition . , i ' m other people working on this are not sitting still either , but , the important thing is that we learn how to do this better , and , . so , our , you can see the kind of numbers that we 're having , say , on speechdat - car which is a hard task , cuz it 's really , it 's just sort of reasonable numbers , starting to be . , it 's still terri phd c: , even for a - matched case it 's sixty percent error rate reduction , which is phd c: so actually , this is in between what we had with the previous vad and what sunil did with an ideal vad . which gave sixty - two percent improvement , right ? phd c: so , if you use , like , an ideal vad , for dropping the frames , phd c: the best that we can get i that means that we estimate the silence probability on the clean version of the utterances . then you can go up to sixty - two percent error rate reduction , globally . mmm phd a: so that would be even that would n't change this number down here to sixty - two ? phd c: if you add a g good v very good vad , that works as a vad working on clean speech , then you wou you would go professor b: probably . so fi si fifty - three is what you were getting with the old vad . and sixty - two with the , quote , unquote , cheating vad . and fifty - seven is what you got with the real vad . phd c: , the next thing is , i started to play , i do n't want to worry too much about the delay , no . maybe it 's better to wait for the decision from the committee . , but i started to play with the , , tandem neural network . did the configuration that 's very similar to what we did for the february proposal . so . there is a f a first feature stream that use straight mfcc features . , these features actually . and the other stream is the output of a neural network , using as input , also , these , cleaned mfcc . phd c: so there is just this feature stream , the fifteen mfcc plus delta and double - delta . phd c: , so it 's makes forty - five features that are used as input to the htk . and then , there is there are more inputs that comes from the tandem mlp .
professor b: h he likes to use them both , cuz then it has one part that 's discriminative , one part that 's not . phd c: so , , . right now it seems that i tested on speechdat - car while the experiment are running on your on ti - digits . , it improves on the - matched and the mismatched conditions , but it get worse on the highly mismatched . phd c: like , on the - match and medium mismatch , the gain is around five percent relative , but it goes down a lot more , like fifteen percent on the hm case . professor b: so it 's you have seventy - three features , and you 're just feeding them like that . there is n't any klt or anything ? phd a: that 's how you get down to twenty - eight ? why twenty - eight ? phd c: i . it 's i it 's because it 's what we did for the first proposal . we tested , trying to go down phd c: but we have to for , we have to go down , because the limit is now sixty features . we have to find a way to decrease the number of features . phd a: so , it seems funny that i , maybe i do n't u quite understand everything , but that adding features i if you 're keeping the back - end fixed . maybe that 's it . because it seems like just adding information should n't give worse results . but i if you 're keeping the number of gaussians fixed in the recognizer , then professor b: , . but , just in general , adding information suppose the information you added , was a really terrible feature and all it brought in was noise . right ? so so , or or suppose it was n't completely terrible , but it was completely equivalent to another one feature that you had , except it was noisier . in that case you would n't necessarily expect it to be better . phd a: , i was n't necessarily saying it should be better . i ' m just surprised that you 're getting fifteen percent relative worse on the wel professor b: so , " highly mismatched condition " means that your training is a bad estimate of your test . so having , a g a l a greater number of features , if they are n't maybe the right features that you use , certainly can e can easily , make things worse . , you 're right . if you have if you have , lots and lots of data , and you have and your training is representative of your test , then getting more sources of information should just help . but but it 's it does n't necessarily work that way . so i wonder , what 's your thought about what to do next with it ? phd c: i ' m surprised , because i expected the neural net to help more when there is more mismatch , as it was the case for the phd c: , it 's the same training set , so it 's timit with the ti - digits ' , noises , added . professor b: , we might have to experiment with , better training sets . again . i the other thing is , before you found that was the best configuration , but you might have to retest those things now that we have different the rest of it is different , what 's the effect of just putting the neural net on without the o other path ? , what the straight features do . that gives you this . what it does in combination . you do n't necessarily phd a: what if you did the would it make sense to do the klt on the full set of combined features ? instead of just on the phd c: i g i . the reason i did it this ways is that in february , it we tested different things like that , so , having two klt , having just a klt for a network , or having a global klt . phd a: , i see . 
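( for concreteness , the two - stream assembly being described might look like the following — a sketch with random placeholder data ; the 45- and 28-dimensional sizes come from the discussion , while the 56 mlp outputs are an assumed phone - set size : )

```python
import numpy as np

# Two-stream tandem assembly: 45 cepstral features per frame (15 MFCC +
# deltas + double-deltas) concatenated with MLP outputs reduced to 28
# dimensions with a KLT (i.e. PCA estimated on training data).
T = 500
mfcc = np.random.randn(T, 45)
mlp_out = np.random.randn(T, 56)

centered = mlp_out - mlp_out.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
basis = eigvec[:, np.argsort(eigval)[::-1][:28]]   # top-28 KLT directions
mlp_klt = centered @ basis

features = np.hstack([mfcc, mlp_klt])
print(features.shape)   # (500, 73) -- over the sixty-feature limit
```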
so you tried the global klt before and it did n't really phd c: and , th the differences between these configurations were not huge , but it was marginally better with this configuration . professor b: but , that 's another thing to try , since things are different . and i if the these are all so all of these seventy - three features are going into , the hmm . and is are i are any deltas being computed of tha of them ? phd c: , maybe we can add some context from these features also as dan did in his last work . professor b: could . i but the other thing i was thinking was , now i lost track of what i was thinking . but . phd a: what is the you said there was a limit of sixty features ? what 's the relation between that limit and the , forty - eight , forty eight hundred bits per second ? phd c: and generally , i it s allows you to transmit like , fifteen , cepstrum . professor b: the issue was that , this is supposed to be a standard that 's then gon na be fed to somebody 's recognizer somewhere which might be , it might be a concern how many parameters are use u used and . they felt they wanted to set a limit . so they chose sixty . some people wanted to use hundreds of parameters and that bothered some other people . u and so they just chose that . i it 's r arbitrary too . but but that 's what was chosen . i remembered what i was going to say . what i was going to say is that , maybe with the noise removal , these things are now more correlated . so you have two sets of things that are uncorrelated , within themselves , but they 're pretty correlated with one another . they 're being fed into these , variants , only gaussians and , so maybe it would be a better idea now than it was before to , have , one klt over everything , to de - correlate it . professor b: so we found this , this macrophone data , and , that we were using for these other experiments , to be pretty good . so that 's i after you explore these other alternatives , that might be another way to start looking , is just improving the training set . , we were getting , lots better recognition using that , than , you do have the problem that , u i we are not able to increase the number of gaussians , or anything to , to match anything . so we 're only improving the training of our feature set , but that 's still probably something . phd a: so you 're saying , add the macrophone data to the training of the neural net ? the tandem net ? professor b: that 's the only place that we can train . we ca n't train the other with anything other than the standard amount , so . professor b: , you did experiments back then where you made it bigger and it and that was the threshold point . much less than that , it was worse , and much more than that , it was n't much better . phd d: so is it though the performance , big relation in the high ma high mismatch has something to do with the , cleaning up that you that is done on the timit after adding noise ? so it 's i all the noises are from the ti - digits , right ? so you i phd d: , it 's like the high mismatch of the speechdat - car after cleaning up , maybe having more noise than the training set of timit after clean s after you do the noise clean - up . , earlier you never had any compensation , you just trained it straight away . so it had like all these different conditions of s n rs , actually in their training set of neural net . but after cleaning up you have now a different set of s n rs , right ? for the training of the neural net . 
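( picking up the " one klt over everything " suggestion a few turns back — estimating a single transform on the concatenated vectors also gives a natural way to truncate to the sixty - feature limit . a sketch , with placeholder data standing in for real training features : )

```python
import numpy as np

# One global KLT over the concatenated 73-dim vectors, to de-correlate
# the two streams before the diagonal-covariance Gaussians; keeping the
# top 60 components also meets the feature-count limit.  In practice the
# transform is estimated once on training data, then applied unchanged.
T = 500
features = np.random.randn(T, 73)

mean = features.mean(axis=0)
cov = np.cov(features - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
global_klt = eigvec[:, order[:60]]        # 60 decorrelated dimensions

decorrelated = (features - mean) @ global_klt
print(decorrelated.shape)                 # (500, 60)
```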
and is it something to do with the mismatch that 's created after the cleaning up , like the high mismatch phd c: you mean the most noisy occurrences on speechdat - car might be a lot more noisy than professor b: so right . so the training the neural net is being trained with noise compensated . professor b: which makes sense , but , you 're saying , the noisier ones are still going to be , even after our noise compensation , are still gon na be pretty noisy . phd d: , so now the after - noise compensation the neural net is seeing a different set of s n rs than that was originally there in the training set . of timit . because in the timit it was zero to some clean . phd d: so the net saw all the snr @ conditions . now after cleaning up it 's a different set of snr . and that snr may not be , like , com covering the whole set of s n rs that you 're getting in the speechdat - car . professor b: but the speechdat - car data that you 're seeing is also reduced in noise by the noise compensation . phd d: , it is . but , i ' m saying , there could be some issues of phd c: , if the initial range of snr is different , we the problem was already there before . professor b: , it depends on whether you believe that the noise compensation is equally reducing the noise on the test set and the training set . professor b: , you 're saying there 's a mismatch in noise that was n't there before , but if they were both the same before , then if they were both reduic reduced equally , then , there would not be a mismatch . , this may be heaven forbid , this noise compensation process may be imperfect , so maybe it 's treating some things differently . phd d: , i . that could be seen from the ti - digits , testing condition because , the noises are from the ti - digits , right ? noise so cleaning up the ti - digits and if the performance goes down in the ti - digits mismatch high mismatch like this professor b: , one of the things about , the macrophone data , , it was recorded over many different telephones . so , there 's lots of different kinds of acoustic conditions . , it 's not artificially added noise or anything . so it 's not the same . i do n't think there 's anybody recording over a car from a car , but it 's varied enough that if doing this adjustments , and playing around with it does n't , make it better , the most , it seems like the most obvious thing to do is to improve the training set . , what we were the condition it it gave us an enormous amount of improvement in what we were doing with meeting recorder digits , even though there , again , these m macrophone digits were very , very different from , what we were going on here . , we were n't talking over a telephone here . but it was just having a variation in acoustic conditions was just a good thing . phd c: , actually to s , what i observed in the hm case is that the number of deletion dramatically increases . it it doubles . phd c: when i added the num the neural network it doubles the number of deletions . , so i do n't how to interpret that , but , mmm phd a: and and did an other numbers stay the same ? insertion substitutions stay the same ? phd c: they maybe they are a little bit , lower . they are a little bit better . but professor b: did they increase the number of deletions even for the cases that got better ? say , for the , it professor b: so it 's only the highly mismatched ? 
and it remind me again , the " highly mismatched " means that the phd c: it 's clean training , close microphone training and distant microphone , high speed , . phd c: but , but without the neural network it 's , it 's better . it 's just when we add the neural networks . phd a: that says that , the models in , the recognizer are really paying attention to the neural net features . professor b: actually { nonvocalsound } the timit noises are a range of noises and they 're not so much the stationary driving noises , right ? it 's it 's pretty different . is n't it ? phd c: , there is a car noise . so there are f just four noises . , " car " , babble , phd c: train station , . so it 's mostly , " car " is stationary , babble , it 's a stationary background plus some voices , some speech over it . and the other two are rather stationary also . professor b: , that if you run it actually , you maybe you remember this . when you in the old experiments when you ran with the neural net only , and did n't have this side path , , with the pure features as , did it make things better to have the neural net ? was it about the same ? , w i professor b: until you put the second path in with the pure features , the neural net was n't helping . , that 's interesting . phd c: it was helping , if the features are b were bad , just plain p l ps or m f c cs . as soon as we added lda on - line normalization , and all these things , then professor b: they were doing similar enough things . , i still think it would be k interesting to see what would happen if you just had the neural net without the side thing . and and the thing i have in mind is , maybe you 'll see that the results are not just a little bit worse . maybe that they 're a lot worse . ? but if on the ha other hand , it 's , say , somewhere in between what you 're seeing now and , what you 'd have with just the pure features , then maybe there is some problem of a , combination of these things , or correlation between them somehow . if it really is that the net is hurting you at the moment , then the issue is to focus on , improving the net . so what 's the overall effe , you have n't done all the experiments but you said it was i somewhat better , say , five percent better , for the first two conditions , and fifteen percent worse for the other one ? but it 's but that one 's weighted lower , phd c: i d it 's it was one or two percent . that 's not that bad , but it was l like two percent relative worse on speechdat - car . i have to check that . , i have i will . phd d: , it will overall it will be still better even if it is fifteen percent worse , because the fifteen percent worse is given like f w twenty - five point two five eight . professor b: so the so the worst it could be , if the others were exactly the same , is four , phd d: , so it 's four . is i so either it 'll get cancelled out , or you 'll get , like , almost the same . phd a: , i ' ve been wondering about something . in the , a lot of the , the hub - five systems , recently have been using lda . and they , they run lda on the features right before they train the models . so there 's the lda is right there before the h m so , you guys are using lda but it seems like it 's pretty far back in the process . phd d: , this lda is different from the lda that you are talking about . the lda that you saying is , like , you take a block of features , like nine frames , and then do an lda on it , and then reduce the dimensionality to something like twenty - four like that . 
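( stepping back to the deletion / insertion / substitution tallies discussed above — here is a minimal alignment routine that produces them ; standard levenshtein bookkeeping , nothing project - specific . the lda thread picks up again right after : )

```python
def error_counts(ref, hyp):
    """Levenshtein alignment returning (substitutions, deletions, insertions)."""
    # Each cell holds (total_cost, subs, dels, ins); cost ties break arbitrarily.
    prev = [(j, 0, 0, j) for j in range(len(hyp) + 1)]   # empty reference prefix
    for i in range(1, len(ref) + 1):
        cur = [(i, 0, i, 0)]                             # empty hypothesis prefix
        for j in range(1, len(hyp) + 1):
            sc = 0 if ref[i - 1] == hyp[j - 1] else 1
            c, s, d, n = prev[j - 1]
            sub = (c + sc, s + sc, d, n)                 # match or substitute
            c, s, d, n = prev[j]
            dele = (c + 1, s, d + 1, n)                  # delete a reference word
            c, s, d, n = cur[j - 1]
            ins = (c + 1, s, d, n + 1)                   # insert a hypothesis word
            cur.append(min(sub, dele, ins))
        prev = cur
    return prev[-1][1:]

# e.g. reference digits "1 2 3 4" recognized as "1 3 0":
print(error_counts("1234", "130"))   # (1, 1, 0): one substitution, one deletion
```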
phd d: so this is a two dimensional tile . and the lda that we are f applying is only in time , not in frequency high cost frequency . so it 's like more like a filtering in time , rather than doing a r phd a: . ok . so what i what about , i u what i w , i if this is a good idea or not , but what if you put ran the other lda , on your features right before they go into the hmm ? phd c: what do we do with the ann is something like that except that it 's not linear . but it 's like a nonlinear discriminant analysis . phd a: i g but , w but the other features that you have , th the non - tandem ones , phd c: , i know . that that . , in the proposal , they were transformed u using pca , but , it might be that lda could be better . professor b: the a the argument i is i in and it 's not like we really know , but the argument anyway is that , , we always have the prob , discriminative things are good . lda , neural nets , they 're good . , they 're good because you learn to distinguish between these categories that you want to be good at distinguishing between . and pca does n't do that . it pac - pca low - order pca throws away pieces that are , maybe not gon na be helpful just because they 're small , . but , the problem is , training sets are n't perfect and testing sets are different . so you f you face the potential problem with discriminative , be it lda or neural nets , that you are training to discriminate between categories in one space but what you 're really gon na be g getting is something else . and so , stephane 's idea was , let 's feed , both this discriminatively trained thing and something that 's not . so you have a good set of features that everybody 's worked really hard to make , and then , you discriminately train it , but you also take the path that does n't have that , and putting those in together . and that seem so it 's like a combination of the , what , dan has been calling , a feature combination versus posterior combination . it 's it 's , you have the posterior combination but then you get the features from that and use them as a feature combination with these other things . and that seemed , at least in the last one , as he was just saying , he when he only did discriminative , i it actually was it did n't help in this particular case . there was enough of a difference , i , between the testing and training . but by having them both there the fact is some of the time , the discriminative is gon na help you . and some of the time it 's going to hurt you , and by combining two information sources if , if phd a: so you would n't necessarily then want to do lda on the non - tandem features because now you 're doing something to them that professor b: now , again , it 's we 're just trying these different things . we do n't really 's gon na work best . but if that 's the hypothesis , at least it would be counter to that hypothesis to do that . and in principle you would think that the neural net would do better at the discriminant part than lda . phd a: exactly . , we were getting ready to do the tandem , for the hub - five system , and , andreas and i talked about it , and the idea w the thought was , " , , that i th the neural net should be better , but we should at least have , a number , to show that we did try the lda in place of the neural net , so that we can , show a clear path . , that you have it without it , then you have the lda , then you have the neural net , and you can see , theoretically . 
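( to make the lda - in - time point above concrete : the lda used here amounts to one learned fir filter per cepstral trajectory , applied along time only , unlike the hub - five style lda that projects a stacked time - by - frequency tile . a sketch ; the filter taps below are random placeholders for a real learned discriminant filter : )

```python
import numpy as np

# Temporal LDA as a per-coefficient FIR filter: each cepstral
# coefficient's trajectory is convolved with one impulse response.
T, n_ceps, filt_len = 500, 13, 11
cepstra = np.random.randn(T, n_ceps)
lda_filter = np.random.randn(filt_len)   # placeholder learned filter

filtered = np.stack(
    [np.convolve(cepstra[:, k], lda_filter, mode="same") for k in range(n_ceps)],
    axis=1,
)
print(filtered.shape)   # (500, 13): same shape, filtered along time only
```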
i was just wondering i phd a: that 's what that 's what we 're gon na do next as soon as i finish this other thing . professor b: no , no , but it might not even be true . , it 's it 's a great idea . , one of the things that always disturbed me , in the resurgence of neural nets that happened in the eighties was that , a lot of people because neural nets were pretty easy to use a lot of people were just using them for all sorts of things without , looking into the linear , versions of them . and , people were doing recurrent nets but not looking at iir filters , and , , so , it 's definitely a good idea to try it . phd a: and everybody 's putting that on their systems now , and so , i that 's what made me wonder about this , professor b: , they ' ve been putting them in their systems off and on for ten years , phd a: , what is it 's like in the hub - five evaluations , and you read the system descriptions and everybody 's got , lda on their features . phd c: it 's the transformation they 're estimating on , they are trained on the same data as the final hmm are . phd a: , so it 's different . exactly . cuz they do n't have these , mismatches that you guys have . so that 's why i was wondering if maybe it 's not even a good idea . i enough about it , professor b: , part of why you were getting into the klt y you were describing to me at one point that you wanted to see if , , getting good orthogonal features was and combining the different temporal ranges was the key thing that was happening or whether it was this discriminant thing , right ? so you were just trying you r , this is it does n't have the lda aspect but th as far as the orthogonalizing transformation , you were trying that at one point , right ? you were . does something . it does n't work as . phd d: so , i ' ve been exploring a parallel vad without neural network with , like , less latency using snr and energy , after the cleaning up . so what i 'd been trying was , after the b after the noise compensation , n i was trying t to f find a f feature based on the ratio of the energies , that is , cl after clean and before clean . so that if they are , like , pretty c close to one , which means it 's speech . and if it is n if it is close to zero , which is so it 's like a scale @ probability value . so i was trying , with full band and multiple bands , m ps separating them to different frequency bands and deriving separate decisions on each bands , and trying to combine them . the advantage being like it does n't have the latency of the neural net if it can g and it gave me like , one point one more than one percent relative improvement . so , from fifty - three point six it went to fifty f four point eight . so it 's , like , only slightly more than a percent improvement , just like which means that it 's doing a slightly better job than the previous vad , at a l lower delay . so , phd d: so d with the delay , that 's gone is the input , which is the sixty millisecond . the forty plus twenty . at the input of the neural net you have this , f nine frames of context plus the delta . phd d: so that delay , plus the lda . , so the delay is only the forty millisecond of the noise cleaning , plus the hundred millisecond smoothing at the output . 
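( a minimal version of the energy - ratio feature being described might look like this — frame length and threshold here are assumptions , and a per - band version would apply the same idea after a filterbank : )

```python
import numpy as np

# Per-frame ratio of energy after noise cleaning to energy before it:
# close to 1 -> mostly speech survived the cleaning,
# close to 0 -> the frame was mostly noise that got removed.
def energy_ratio_vad(noisy, cleaned, frame_len=80, threshold=0.5):
    n_frames = len(noisy) // frame_len
    flags = []
    for i in range(n_frames):
        sl = slice(i * frame_len, (i + 1) * frame_len)
        e_before = float(np.sum(noisy[sl] ** 2)) + 1e-10   # guard divide-by-zero
        e_after = float(np.sum(cleaned[sl] ** 2))
        flags.append(e_after / e_before > threshold)
    return np.array(flags)

noisy = np.random.randn(8000)        # placeholder noisy signal
cleaned = 0.3 * noisy                # placeholder "cleaned" signal
print(energy_ratio_vad(noisy, cleaned).mean())   # fraction flagged as speech
```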
so the di the biggest the problem f for me was to find a consistent threshold that works across the different databases , because i t i try to make it work on tr speechdat - car and it fails on ti - digits , or if i try to make it work on that it 's just the italian , it does n't work on the finnish . so , so there are there was , like , some problem in balancing the deletions and insertions when i try different thresholds . the i ' m still trying to make it better by using some other features from the after the p clean up maybe , some , correlation auto - correlation or some s additional features of to mainly the improvement of the vad . i ' ve been trying . professor b: now this , " before and after clean " , it sounds like you think that 's a good feature . that that , it you th think that the , the i it appears to be a good feature , right ? what about using it in the neural net ? phd d: so that 's the so we ' ve been thinking about putting it into the neural net also . because they did that itself professor b: so if we can live with the latency or cut the latencies elsewhere , then that would be a , good thing . , anybody has anybody you guys or naren , somebody , tried the , , second th second stream thing ? phd d: , i just h put the second stream in place and , ran one experiment , but just like just to know that everything is fine . so it was like , forty - five cepstrum plus twenty - three mel log mel . and and , just , like , it gave me the baseline performance of the aurora , which is like zero improvement . so tried it on italian just to know that everything is but i did n't export anything out of it because it was , like , a weird feature set . so . professor b: , what , would be more what you 'd want to do is , put it into another neural net . professor b: , we 're not quite there yet . so we have to figure out the neural nets , i . phd d: the , other thing i was wondering was , if the neural net , has any because of the different noise con unseen noise conditions for the neural net , where , like , you train it on those four noise conditions , while you are feeding it with , like , a additional some four plus some f few more conditions which it has n't seen , actually , from the f while testing . phd d: instead of just h having c , those cleaned up t cepstrum , sh should we feed some additional information , like the the we have the vad flag . , should we f feed the vad flag , also , at the input so that it has some additional discriminating information at the input ? phd d: we have the vad information also available at the back - end . so if it is something the neural net is not able to discriminate the classes because most of it is sil , we have dropped some silence f we have dropped so silence frames ? no , we have n't dropped silence frames still . phd d: the b biggest classification would be the speech and silence . so , by having an additional , feature which says " this is speech and this is nonspeech " , it certainly helps in some unseen noise conditions for the neural net . phd d: , we have we are transferring the vad to the back - end feature to the back - end . because we are dropping it at the back - end after everything all the features are computed . phd d: so the neural so that is coming from a separate neural net or some vad . which is which is certainly giving a professor b: you could feed it into the neural net . 
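( feeding the vad decision to the net , as proposed , is just one extra input per frame — a hard 0/1 flag or a soft speech probability . dimensions here are assumptions for illustration : )

```python
import numpy as np

# Append the per-frame VAD value as one additional discriminating input.
T = 500
cepstra = np.random.randn(T, 45)     # cleaned cepstra + deltas
vad_prob = np.random.rand(T)         # per-frame speech probability (or 0/1 flag)

net_input = np.hstack([cepstra, vad_prob[:, None]])
print(net_input.shape)               # (500, 46): one extra input unit needed
```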
the other thing you could do is just , p modify the , output probabilities of the , , neural net , tandem neural net , based on the fact that you have a silence probability . so you have an independent estimator of what the silence probability is , and you could multiply the two things , and renormalize . , you 'd have to do the nonlinearity part and deal with that . , go backwards from what the nonlinearity would , would be . phd a: but in principle would n't it be better to feed it in ? and let the net do that ? professor b: , u not . let 's put it this way . , y you have this complicated system with thousands and thousand parameters and you can tell it , " learn this thing . " or you can say , " it 's silence ! go away ! " , i does n't ? i think the second one sounds a lot more direct . phd a: what if you so , what if you then , since this , what if you only use the neural net on the speech portions ? professor b: but it 's @ , no , you want to train on the nonspeech also , because that 's part of what you 're learning in it , to generate , that it 's it has to distinguish between . phd a: but , if you 're gon na if you 're going to multiply the output of the net by this other decision , would then you do n't care about whether the net makes that distinction , right ? professor b: but this other thing is n't perfect . so that you bring in some information from the net itself . professor b: now the only thing that bothers me about all this is that i the the fact i it 's bothersome that you 're getting more deletions . phd c: so i might maybe look at , is it due to the fact that , the probability of the silence at the output of the network , is , phd d: it may not be it may be too it 's too high in a sense , like , everything is more like a , flat probability . phd d: so , like , it 's not really doing any distinction between speech and nonspeech or , different among classes . phd a: be interesting to look at the for the i wonder if you could do this . but if you look at the , highly mism high mismat the output of the net on the high mismatch case and just look at , the distribution versus the other ones , do you see more peaks ? phd c: it it seems that the vad network does n't , it does n't drop , too many frames because the dele the number of deletion is reasonable . but it 's just when we add the tandem , the final mlp , and then professor b: now the only problem is you do n't want to ta i for the output of the vad before you can put something into the other system , professor b: cuz that 'll shoot up the latency a lot , am i missing something here ? so that 's maybe a problem with what i was just saying . but i phd a: but if you were gon na put it in as a feature it means you already have it by the time you get to the tandem net , right ? phd d: . we w we do n't have it , actually , because it 's it has a high rate energy the vad has a professor b: it 's done in , some of the things are , not in parallel , but certainly , it would be in parallel with the with a tandem net . in time . so maybe , if that does n't work , but it would be interesting to see if that was the problem , anyway . and and then i another alternative would be to take the feature that you 're feeding into the vad , and feeding it into the other one as . and then maybe it would just learn it better . 
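( the " multiply the two things and renormalize " idea above , sketched out — silence at index 0 is an assumption , and " going backwards through the nonlinearity " is taken here to mean returning to the log domain afterwards : )

```python
import numpy as np

# Reweight the tandem net's outputs with an independent silence estimate:
# scale the silence posterior by the VAD's silence probability, renormalize
# each frame, then take logs if the next stage expects log probabilities.
def reweight_silence(tandem_post, vad_sil_prob, sil_index=0):
    post = tandem_post.copy()
    post[:, sil_index] *= vad_sil_prob
    post /= post.sum(axis=1, keepdims=True)
    return np.log(post + 1e-10)

post = np.random.dirichlet(np.ones(28), size=500)   # placeholder posteriors
print(reweight_silence(post, np.random.rand(500)).shape)   # (500, 28)
```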
but that 's , that 's an interesting thing to try to see , if what 's going on is that in the highly mismatched condition , it 's , causing deletions by having this silence probability up too high , at some point where the vad is saying it 's actually speech . which is probably true . professor b: cuz , the v a if the vad said since the vad is right a lot , anyway . might be . , we just started working with it . but these are some good ideas . phd c: , and the other thing , there are other issues maybe for the tandem , like , , do we want to , w n do we want to work on the targets ? or , like , instead of using phonemes , using more context dependent units ? phd c: , i ' m thinking , also , a w about dan 's work where he trained a network , not on phoneme targets but on the hmm state targets . it was giving s slightly better results . professor b: problem is , if you are going to run this on different m test sets , including large vocabulary , phd c: i was just thinking maybe about , like , generalized diphones , and come up with a reasonable , not too large , set of context dependent units , and then anyway we would have to reduce this with the klt . professor b: , maybe . but i d it i it 's all worth looking at , but it sounds to me like , looking at the relationship between this and the speech noise is probably a key thing . that and the correlation between . phd a: so if , if the , high mismatch case had been more like the , the other two cases in terms of giving you just a better performance , how would this number have changed ? professor b: , we what 's it 's gon na be the ti - digits yet . he has n't got the results back yet . phd c: if you extrapolate the speechdat - car - matched and medium - mismatch , it 's around , maybe five . phd c: , it 's around five percent , because it 's if everything is five percent . phd c: i d have the speechdat - car right now , so it 's running it shou we should have the results today during the afternoon , professor b: , i ' m leaving next wednesday . may or may not be in the morning . i leave in the afternoon . professor b: i ' m talking about next week . i ' m leaving next wednesday . this afternoon , right , for the meeting meeting ? , that 's just cuz of something on campus . so next week i wo n't , and the week after i wo n't , cuz i 'll be in finland . and the week after that i wo n't . by that time you 'll be , you 'll both be gone from here . so there 'll be no definitely no meeting on september sixth . professor b: so , sunil will be in oregon . , stephane and i will be in denmark . so it 'll be a few weeks , really , before we have a meeting of the same cast of characters . i , just , you guys should probably meet . and maybe barry will be around . and and then , we 'll start up again with dave and barry and stephane and us on the , twentieth . thirteenth ? about a month ? professor b: i ' m gone for two and a half weeks starting next we d - late next wednesday . phd a: so that 's you wo n't be at the next three of these meetings . is that right ? professor b: , i wo n't it 's probably four because of is it three ? let 's see , twenty - third , thirtieth , sixth . that 's right , next three . and the third one wo n't probably wo n't be a meeting , cuz , su - sunil , stephane , and i will all not be here . professor b: mmm . so it 's just , the next two where there will be there , may as be meetings , but wo n't be at them . 
and then starting up on the thirteenth , { nonvocalsound } , we 'll have meetings again but we 'll have to do without sunil here somehow . phd d: p s it 's like , it 's tentatively all full . , that 's a proposed date , i .

###summary: icsi's meeting recorder group at berkeley meets to discuss , for the most part , progress on the aurora project. the main areas being worked on were the voice activity detector and the tandem data streams. the group discussed possible further investigations arising from these areas , including better linking the two. they also considered how aspects of an absent member's work might be applied to the current project. the meeting closed with a discussion of upcoming absences and how meetings would continue. speaker me018 must confirm with someone further up the project chain what adjustments are needed to work with the new software. the system at its current stage employs the neural networks and second stream , but the group leader would like the network investigated separately , in case it is hurting performance. there are worries regarding the adjustments needed so the new software can handle the group's different feature set. the system , whilst improved , also has increased latency , and although the limit has not yet been set , the group needs to reduce it. likewise the number of features they use , since the limit there was set at an arbitrary value. there has been an increase in the number of deletions among the errors , which is of some concern. speaker mn007 has implemented a new voice activity detector on noise-compensated data , and it performs much better. he has also been working on the tandem neural network. speaker me013 , along with a student , submitted work on reverberation to a speech workshop. speaker me018 has downloaded and compiled the software he was asked to work with in the previous meeting.
professor b: so ne next week we 'll have , both birger and , mike michael kleinschmidt and birger kollmeier will join us . professor b: , and you 're probably gon na go up in a couple three weeks or so ? when d when are you thinking of going up to , ogi ? professor b: ok . good . so at least we 'll have one meeting with yo with you still around , and that 's good . phd d: . so there was this conference call this morning , and the only topic on the agenda was just to discuss a and to come at , to get a decision about this latency problem . professor b: no , this i ' m , this is a conference call between different aurora people or just ? phd d: , . there were like two hours of discussions , and then suddenly , people were tired , i , and they decided on { nonvocalsound } a number , two hundred and twenty , included e including everything . , it means that it 's like eighty milliseconds less than before . phd d: so , currently d , we have system that has two hundred and thirty . so , that 's fine . phd d: so that 's the system that 's described on the second point of this document . professor b: w it 's it 's p d primary primarily determined by the vad at this point , right ? s so we can make the vad a little shorter . professor b: . we probably should do that pretty soon so that we do n't get used to it being a certain way . was hari on the phone ? phd d: , . , it was mainly a discussion between hari and david , who was like mmm , . so , the second thing is the system that we have currently . , yes . we have , like , a system that gives sixty - two percent improvement , but if you want to stick to the this latency , it has a latency of two thirty , but if you want also to stick to the number of features that limit it to sixty , then we go a little bit down but it 's still sixty - one percent . , and if we drop the tandem network , then we have fifty - seven percent . professor b: , but th the two th two thirty includes the tandem network ? and i is the tandem network , small enough that it will fit on the terminal size in terms of ? phd d: no . it 's still in terms of computation , if we use , like , their way of computing the maps the mips , it fits , phd d: and i how much this can be discussed or not , because it 's it could be in rom , so it 's maybe not that expensive . but phd d: i d , i do n't kn remember exactly , but . , i c i have to check that . professor b: . i 'd like to see that , cuz maybe i could think a little bit about it , cuz we maybe we could make it a little smaller or , it 'd be neat if we could fit it all . , i 'd like to see how far off we are . but i it 's still within their rules to have it on the , t , server side . right ? professor b: and this is still ? , y you 're saying here . i c i should just let you go on . phd d: , there were small tricks to make this tandem network work . , mmm , and one of the trick was to , use some hierarchical structure where the silence probability is not computed by the final tandem network but by the vad network . so it looks better when , we use the silence probability from the vad network and we re - scale the other probabilities by one minus the silence probability . so it 's some hierarchical thing , that sunil also tried , on spine and it helps a little bit also . and . , the reason w why we did that with the silence probability was that , professor b: could ? , i ' m really . can you repeat what you were saying about the silence probability ? 
i only my mind was some phd d: so there is the tandem network that e estimates the phone probabilities and the silence probabilities also . and things get better when , instead of using the silence probability computed by the tandem network , we use the silence probability , given by the vad network , phd d: which is smaller , but maybe , so we have a network for the vad which has one hundred hidden units , and the tandem network has five hundred . so it 's smaller but th the silence probability from this network seems , better . , it looks strange , but phd d: but it maybe it 's has something to do to the fact that we do n't have infinite training data and grad e: are you were going to say why what made you wh what led you to do that . phd d: . , there was a p problem that we observed , that there was there were , like , many insertions in the system . actually plugging in the tandem network was increasing , i , the number of insertions . and , so it looked strange and then just using the other silence probability helps . the next thing we will do is train this tandem on more data . professor b: so , in a way what it might i it 's a little bit like combining knowledge sources . because the fact that you have these two nets that are different sizes means they behave a little differently , they find different things . and , if you have , f the distribution that you have from , f speech sounds is w one source of knowledge . and this is and rather than just taking one minus that to get the other , which is essentially what 's happening , you have this other source of knowledge that you 're putting in there . so you make use of both of them in what you 're ending up with . maybe it 's better . anyway , you can probably justify anything if what 's use phd d: and and the features are different also . , the vad does n't use the same features there are . professor b: ! that might be the key , actually . cuz you were really thinking about speech versus nonspeech for that . that 's a good point . phd d: . , there are other things that we should do but , it requires time and we have ideas , like so , these things are like hav having a better vad . , we have some ideas about that . it would probably implies working a little bit on features that are more suited to a voice activity detection . working on the second stream . we have ideas on this also , but w we need to try different things , but their noise estimation , professor b: , back on the second stream , that 's something we ' ve talked about for a while . , { nonvocalsound } that 's certainly a high hope . professor b: so we have this default idea about just using some purely spectral thing ? for a second stream ? phd d: . it was c it was just combined , by the acoustic model . so there was , no neural network for the moment . professor b: right . so , if you just had a second stream that was just spectral and had another neural net and combined there , that , might be good . phd d: . - . , and the other thing , that noise estimation and th , maybe try to train , the training data for the t tandem network , right now , is like i is using the noises from the aurora task and that people might , try to argue about that because then in some cases we have the same noises in for training the network than the noises that are used for testing , so we have t n , to try to get rid of these this problem . professor b: , it 's probably helpful to have a little noise there . but it may be something else th at least you could say it was . 
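( the hierarchical trick just described , written out — silence at index 0 is an assumption ; the vad net supplies the silence probability and the tandem net 's phone probabilities are rescaled to sum to one minus it : )

```python
import numpy as np

# Hierarchical combination: silence probability from the (smaller) VAD net,
# phone probabilities from the tandem net rescaled by (1 - P_silence).
def hierarchical_posteriors(tandem_post, vad_sil_prob, sil_index=0):
    out = np.empty_like(tandem_post)
    phones = np.delete(tandem_post, sil_index, axis=1)
    phones /= phones.sum(axis=1, keepdims=True)          # renormalize phones
    out[:, sil_index] = vad_sil_prob                     # silence from VAD net
    mask = np.arange(tandem_post.shape[1]) != sil_index
    out[:, mask] = phones * (1.0 - vad_sil_prob)[:, None]
    return out

post = np.random.dirichlet(np.ones(28), size=10)         # placeholder posteriors
combined = hierarchical_posteriors(post, np.random.rand(10))
print(np.allclose(combined.sum(axis=1), 1.0))            # True: rows still sum to 1
```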
and then if it does n't hurt too much , though . that 's a good idea . phd d: . the last thing is that we are getting close to human performance . , that 's something i would like to investigate further , but , i did , like , i did , listen to the m most noisy utterances of the speechdat - car italian and tried to transcribe them . phd d: that 's the flaw of the experiment . this is just i j it 's just one subject , phd d: but still , what happens is that , the digit error rate on this is around one percent , while our system is currently at seven percent . , but what happens also is that if i listen to the , { nonvocalsound } a re - synthesized version of the speech and i re - synthesized this using a white noise that 's filtered by a lpc , filter , you can argue , that this is not speech , so the ear is not trained to recognize this . but s actually it sound like whispering , so we are professor b: , it 's there 's two problems there . i mean , so the first is that by doing lpc - twelve with synthesized speech w like you 're saying , it 's i you 're adding other degradation . so it 's not just the noise but you 're adding some degradation because it 's only an approximation . and the second thing is which is m maybe more interesting is that , if you do it with whispered speech , you get this number . what if you had done analysis re - synthesis and taken the pitch as ? alright ? so now you put the pitch in . what would the percentage be then ? see , that 's the question . so , you see , if it 's , let 's say it 's back down to one percent again . that would say at least for people , having the pitch is really , really important , which would be interesting in itself . professor b: if i on the other hand , if it stayed up near five percent , then i 'd say " boy , lpc n twelve is pretty crummy " . ? so i ' m not how we can conclude from this anything about that our system is close to the human performance . phd d: ye . , that l ey that , what i listened to when i re - synthesized the lp - the lpc - twelve spectrum is in a way what the system , is hearing , cuz @ all the , excitation all the , the excitation is not taken into account . that 's what we do with our system . professor b: twenty . , th lpc is not a really great representation of speech . so , all i ' m saying is that you have in addition to the w the , removal of pitch , you also are doing , a particular parameterization , which , so , let 's see , how would you do ? so , fo professor b: no . actually , we d we do n't , because we do , , mel filter bank , . phd a: could n't you t could n't you , test the human performance on just the original audio ? phd a: ok . so , y , your performance was one percent , and then when you re - synthesize with lpc - twelve it went to five . ok . professor b: we were we were j it it 's a little bit still apples and oranges because we are choosing these features in order to be the best for recognition . i if you listen to them they still might not be very even if you made something closer to what we 're gon na i it might not sound very good . , and i the degradation from that might actually make it even harder , to understand than the lpc - twelve . so all i ' m saying is that the lpc - twelve puts in synthesis puts in some degradation that 's not what we 're used to hearing , and is , it 's not it 's not just a question of how much information is there , as if you will always take maximum advantage of any information that 's presented to you . , you hear some things better than others . 
and so it is n't professor b: but , i agree that it says that , the information that we 're feeding it is probably , a little bit , minimal . there 's definitely some things that we ' ve thrown away . and that 's why i was saying it might be interesting if you an interesting test of this would be if you actually put the pitch back in . so , you just extract it from the actual speech and put it back in , and see does that is that does that make the difference ? if that if that takes it down to one percent again , then you 'd say " ok , it 's having , not just the spectral envelope but also the pitch that , @ has the information that people can use , anyway . " phd a: but from this it 's pretty safe to say that the system is with either two to seven percent away from the performance of a human . right ? so it 's somewhere in that range . professor b: , so it 's it 's one point four times , to , seven times the error , professor b: for stephane . so , but i i . i do do n't wanna take you away from other things . professor b: but that 's what that 's the first thing that i would be curious about , is , i when you we phd d: but the signal itself is like a mix of , of a periodic sound and , @ , unvoiced sound , and the noise which is mostly , noise . not periodic . so , what do you mean exactly by putting back the pitch in ? because professor b: . you did lpc re - synthesis l pc re - synthesis . so , and you did it with a noise source , rather than with a s periodic source . so if you actually did real re - synthesis like you do in an lpc synthesizer , where it 's unvoiced you use noise , where it 's voiced you use , periodic pulses . phd d: , but it 's neither purely voiced or purely unvoiced . esp - especially because there is noise . professor b: but it but that if you , if you detect that there 's periodic s strong periodic components , then you can use a voiced voice thing . , it 's probably not worth your time . it 's it 's a side thing and there 's a lot to do . professor b: but i ' m just saying , at least as a thought experiment , that 's what i would wanna test . , i wan would wanna drive it with a two - source system rather than a one - source system . and then that would tell you whether it 's cuz we ' ve talked about , like , this harmonic tunneling or other things that people have done based on pitch , maybe that 's really a key element . maybe maybe , , without that , it 's not possible to do a whole lot better than we 're doing . that that could be . phd d: that 's what i was thinking by doing this es experiment , like mmm . evi professor b: but , other than that , i do n't think it 's , other than the pitch de information , it 's hard to imagine that there 's a whole lot more in the signal that , that we 're throwing away that 's important . professor b: right ? , we 're using a fair number of filters in the filter bank . that look professor b: that 's that 's , one percent is what i would figure . if somebody was paying really close attention , you might get i would actually think that if , you looked at people on various times of the day and different amounts of attention , you might actually get up to three or four percent error on digits . , we 're not incredibly far off . on the other hand , with any of these numbers except maybe the one percent , it 's st it 's not actually usable in a commercial system with a full telephone number . professor b: good . 
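( a sketch of the re - synthesis experiment and the proposed two - source variant — a real experiment would use per - frame twelfth - order lpc coefficients extracted from the original utterance ; the filter and pitch below are stable placeholders : )

```python
import numpy as np
from scipy.signal import lfilter

# Drive an all-pole (LPC) filter either with white noise -- what was
# listened to, which sounds like whispering -- or with a pulse train at
# the extracted pitch, the professor's two-source thought experiment.
fs = 8000
a = np.array([1.0, -0.9])                 # placeholder all-pole coefficients
n = fs                                    # one second of output

noise_excitation = np.random.randn(n)     # unvoiced source -> "whispered"
pulse_excitation = np.zeros(n)
pulse_excitation[::fs // 100] = 1.0       # voiced source at ~100 Hz pitch

whispered = lfilter([1.0], a, noise_excitation)
voiced = lfilter([1.0], a, pulse_excitation)
```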
, while we 're still on aurora maybe you can talk a little about the status with the , wall street journal things for it . phd a: so i ' ve , downloaded , a couple of things from mississippi state . , one is their software their , lvcsr system . downloaded the latest version of that . got it compiled and everything . , downloaded the scripts . they wrote some scripts that make it easy to run the system on the wall street journal , data . , so i have n't run the scripts yet . , i ' m waiting there was one problem with part of it and i wrote a note to joe asking him about it . so i ' m waiting to hear from him . but , i did print something out just to give you an idea about where the system is . , they on their web site they , did this little table of where their system performs relative to other systems that have done this task . and , the mississippi state system using a bigram grammar , is at about eight point two percent . other comparable systems from , were getting from , like six point nine , six point eight percent . so they 're phd a: this is on clean . they they ' ve started a table where they 're showing their results on various different noise conditions but they do n't have a whole lot of it filled in and i did n't notice until after i 'd printed it out that , they do n't say here what these different testing conditions are . you actually have to click on it on the web site to see them . so i what those numbers really mean . phd a: , see , i was a little confused because on this table , i ' m the they 're showing word error rate . but on this one , i if these are word error rates because they 're really big . so , under condition one here it 's ten percent . then under three it goes to sixty - four point six percent . phd a: so m i maybe they 're error rates but they 're , they 're really high . professor b: , we w what 's some of the lower error rates on , some of the higher error rates on , some of these w , highly mismatched difficult conditions ? what 's a ? professor b: and if you 're saying sixty - thousand word recognition , getting sixty percent error on some of these noise condition not surprising . phd a: so , that 's probably what it is then . so they have a lot of different conditions that they 're gon na be filling out . professor b: it 's a bad sign when you looking at the numbers , you ca n't tell whether it 's accuracy or error rate . phd a: . it 's it 's gon na be hard . , they 're i ' m still waiting for them to release the , multi - cpu version of their scripts , cuz right now their script only handles processing on a single cpu , which will take a really long time to run . so . but their s phd a: i beli yes , for the training also . and , they 're supposed to be coming out with it any time , the multi - cpu one . so , as soon as they get that , then i 'll grab those too and so w phd a: . i 'll go ahead and try to run it though with just the single cpu one , phd a: and i they , released like a smaller data set that you can use that only takes like sixteen hours to train and . so i can run it on that just to make that the thing works and everything . professor b: ! good . cuz we 'll i the actual evaluation will be in six weeks . so . is that about right you think ? phd a: it was n't on the conference call this morning ? did they say anything on the conference call about , how the wall street journal part of the test was going to be run ? 
because i remembered hearing that some sites were saying that they did n't have the compute to be able to run the wall street journal at their place , so there was some talk about having mississippi state run the systems for them . and i did did that come up ? phd d: , no . , this first , this was not the point of this the meeting today phd d: , frankly , i because i d did n't read also the most recent mails about the large - vocabulary task . but , did you do you still , get the mails ? you 're not on the mailing list or what ? professor b: i have to say , there 's something funny - sounding about saying that one of these big companies does n't have enough cup compute power do that , so they 're having to have it done by mississippi state . it just sounds funny . phd a: because there 's this whole issue about , simple tuning parameters , like word insertion penalties . and whether or not those are going to be tuned or not , and so . , it makes a big difference . if you change your front - end , the scale is completely can be completely different , so . it seems reasonable that at least should be tweaked to match the front - end . phd a: i did , but joe said , " what you 're saying makes sense and i " . so he does n't the answer is . , that 's th we had this back and forth a little bit about , are sites gon na are you gon na run this data for different sites ? and , if mississippi state runs it , then maybe they 'll do a little optimization on that parameter , and , but then he was n't asked to run it for anybody . so i it 's just not clear yet what 's gon na happen . , he 's been putting this out on their web site and for people to grab but i have n't heard too much about what 's happening . professor b: so it could be , chuck and i had actually talked about this a couple times , and over some lunches , that , one thing that we might wanna do the - there 's this question about , what do you wanna scale ? suppose y you ca n't adjust these word insertion penalties and , so you have to do everything at the level of the features . what could you do ? and , one thing i had suggested at an earlier time was maybe some scaling , some root of the , , features . but the problem with that is that is n't quite the same , it occurred to me later , because what you really want to do is scale the , @ the range of the likelihoods rather than professor b: but , what might get at something similar , it just occurred to me , is an intermediate thing is because we do this strange thing that we do with the tandem system , at least in that system what you could do is take the , , values that come out of the net , which are something like log probabilities , and scale those . and then , then at least those things would have the right values or the right range . and then that goes into the rest of it and then that 's used as observations . so it 's , another way to do it . professor b: but but , so because what we 're doing is pretty strange and complicated , we do n't really the effect is at the other end . so , my thought was maybe , they 're not used as probabilities , but the log probabilities we 're taking advantage of the fact that something like log probabilities has more of a gaussian shape than gaus - than probabilities , and so we can model them better . so , in a way we 're taking advantage of the fact that they 're probabilities , because they 're this quantity that looks gaussian when you take it 's log . so , maybe it would have a reasonable effect to do that . i d i . 
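( the feature - side knob being proposed , in code — since the tandem features behave like log probabilities , raising the underlying probabilities to a power gamma is just a multiplicative scaling in the log domain ; gamma would be tuned on a small development set as a stand - in for the untouchable insertion penalty : )

```python
import numpy as np

# Scale the net's log-probability-like outputs: gamma * log(post)
# is log(post ** gamma), i.e. a change of likelihood dynamic range.
def scale_log_posteriors(log_post, gamma):
    return gamma * log_post

log_post = np.log(np.random.dirichlet(np.ones(28), size=500))
for gamma in (0.5, 1.0, 2.0):            # candidate scales to try
    print(gamma, scale_log_posteriors(log_post, gamma).std())
```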
but , i we still have n't had a ruling back on this . and we may end up being in a situation where we just really ca n't change the word insertion penalty . but the other thing we could do is also we could , this may not help us , in the evaluation but it might help us in our understanding at least . we might , just run it with different insper insertion penalties , and show that , " , ok , not changing it , playing the rules the way you wanted , we did this . but if we did that , it made a big difference . " phd a: i wonder if it might be possible to , simulate the back - end with some other system . so we get our f front - end features , and then , as part of the process of figuring out the scaling of these features , if we 're gon na take it to a root or to a power , we have some back - end that we attach onto our features that simulates what would be happening . phd a: and just adjust it until that our l version of the back - end , decides that professor b: , we can probably use the real thing , ca n't we ? and then jus just , use it on a reduced test set . phd a: . , . that 's true . and then we just use that to determine some scaling factor that we use . professor b: . so , that 's a reasonable thing to do and the only question is what 's the actual knob that we use ? and the knob that we use should , unfortunately , like i say , i the analytic solution to this cuz what we really want to do is change the scale of the likelihoods , not the cha not the scale of the observations . phd a: it 's the same system that they use when they participate in the hub - five evals . it 's a , came out of , looking a lot like htk . , they started off with , when they were building their system they were always comparing to htk to make they were getting similar results . and so , it 's a gaussian mixture system , professor b: do they have the same mix - down procedure , where they start off with a small number of some things professor b: d do what tying they use ? are they some a bunch of gaussians that they share across everything ? or or if it 's ? phd a: , th i have i do n't have it up here but i have a the whole system description , that describes exactly what their system is and i ' m not . but , it 's some a mixture of gaussians and , clustering they 're they 're trying to put in all of the standard features that people use nowadays . professor b: so the other , aurora thing maybe is if any of this is gon na come in time to be relevant , but , we had talked about , guenter playing around , over in germany and , @ , possibly coming up with something that would , , fit in later . , i saw that other mail where he said that he , it was n't going to work for him to do cvs . professor b: so he just has it all sitting there . so if he 'll he might work on improving the noise estimate or on some histogram things , or . saw the eurospeech we we did n't talk about it at our meeting but saw the just read the paper . someone , i forget the name , and ney , about histogram equalization ? did you see that one ? phd d: it was something similar to n on - line normalization finally , in the idea of normalizing professor b: . but it 's a little more it 's a little finer , so they had like ten quantiles and they adjust the distribution . professor b: and then , so this is just a histogram of the amplitudes , i . and then , people do this in image processing some . you have this histogram of levels of brightness or whatever . 
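A rough sketch of the quantile-matching idea under discussion (the description of the method continues just below). This is reconstructed from the verbal account of the paper, not the authors' code; the ten-quantile default, the per-band usage, and the variable names are assumptions, and the floor they apply at low levels is omitted here.

```python
import numpy as np

def fit_quantiles(values, n_quantiles=10):
    """Reference quantiles, e.g. measured once over the training data."""
    qs = np.linspace(0.0, 1.0, n_quantiles + 1)
    return np.quantile(values, qs)

def equalize(test_values, test_q, train_q):
    """Piece-wise linear map so the test histogram matches training.

    np.interp sends each test value from its position among the test
    quantiles to the corresponding training quantile; values outside
    the range are clipped at the outer quantiles.
    """
    return np.interp(test_values, test_q, train_q)

# hypothetical per-utterance use on one band of a log-spectral feature:
# train_q = fit_quantiles(training_band_energies)   # measured offline
# test_q = fit_quantiles(utterance_band_energies)   # this utterance only
# cleaned = equalize(utterance_band_energies, test_q, train_q)
```

np.interp gives exactly the piece-wise linear variant; the power-law variant they mention would replace the linear segments with fitted power functions.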
and and then , when you get a new thing that you want to adjust to be better in some way , you adjust it so that the histogram of the new data looks like the old data . you do this piece - wise linear or , some piece - wise approximation . they did a one version that was piece - wise linear and another that had a power law thing between them between the points . they said they s they see it in a way as s for the speech case as being a generalization of spectral subtraction in a way , because , in spectral subtraction you 're trying to get rid of this excess energy . , it 's not supposed to be there . and , this is adjusting it for a lot of different levels . and then they have s they have some , a floor , so if it gets too low you do n't do it . and they claimed very results , phd a: and and that , histogram represents the different energy levels that have been seen at that frequency ? professor b: i do n't remember that . and how often they you ' ve seen them . . and they do they said that they could do it for the test so you do n't have to change the training . you just do a measurement over the training . and then , for testing , you can do it for one per utterance . even relatively short utterances . and they claim it works pretty . phd a: so they , is the idea that you run a test utterance through some histogram generation thing and then you compare the histograms and that tells you what to do to the utterance to make it more like ? professor b: i in pri in principle . i did n't read carefully how they actually implemented it , whether it was some , on - line thing , or whether it was a second pass , or what . but but they that that was the idea . so that seemed , different . we 're curious about , what are some things that are , u , @ conceptually quite different from what we ' ve done . cuz we , one thing that w that , stephane and sunil seemed to find , was , they could actually make a unified piece of software that handled a range of different things that people were talking about , and it was really just setting of different constants . and it would turn , one thing into another . it 'd turn wiener filtering into spectral subtraction , or whatever . but there 's other things that we 're not doing . so , we 're not making any use of pitch , which again , might be important , because the between the harmonics is probably a schmutz . and and the , transcribers will have fun with that . the , at the harmonics is n't so much . and and , and we there 's this overall idea of really matching the hi distributions somehow . , not just subtracting off your estimate of the noise . so i , guenter 's gon na play around with some of these things now over this next period , or ? professor b: , he 's got it anyway , so he can . so potentially if he came up with something that was useful , like a diff a better noise estimation module , he could ship it to you guys u up there we could put it in . professor b: that 's good . so , why do n't we just , starting a w couple weeks from now , especially if you 're not gon na be around for a while , we 'll be shifting more over to some other territory . but , , n not so much in this meeting about aurora , but , , maybe just , quickly today about maybe you could just say a little bit about what you ' ve been talking about with michael . and and then barry can say something about what we 're talking about . grad c: ok . so michael kleinschmidt , who 's a phd student from germany , showed up this week . he 'll be here for about six months . 
and he 's done some work using an auditory model of , human hearing , and using that f , to generate speech recognition features . and he did work back in germany with , a toy recognition system using , isolated digit recognition as the task . it was actually just a single - layer neural network that classified words classified digits , . , and he tried that on some aurora data and got results that he thought seemed respectable . and he w he 's coming here to u use it on a , a real speech recognition system . so i 'll be working with him on that . and , maybe i should say a little more about these features , although i do n't understand them that . the it 's a two - stage idea . and , the first stage of these features correspond to what 's called the peripheral auditory system . and i that is like a filter bank with a compressive nonlinearity . and i ' m - i ' m not what we have @ in there that is n't already modeled in something like , plp . i should learn more about that . and then the second stage is , the most different thing , from what we usually do . it 's , it computes features which are , based on like based on diffe different w , wavelet basis functions used to analyze the input . so th he uses analysis functions called gabor functions , which have a certain extent , in time and in frequency . and the idea is these are used to sample , the signal in a represented as a time - frequency representation . so you 're sampling some piece of this time - frequency plane . and , that , is interesting , cuz , @ for one thing , you could use it , in a multi - scale way . you could have these instead of having everything like we use a twenty - five millisecond or so analysis window , typically , and that 's our time scale for features , but you could using this , basis function idea , you could have some basis functions which have a lot longer time scale and , some which have a lot shorter , and so it would be like a set of multi - scale features . so he 's interested in , th - this is because it 's , there are these different parameters for the shape of these basis functions , there are a lot of different possible basis functions . and so he actually does an optimization procedure to choose an optimal set of basis functions out of all the possible ones . grad c: the method he uses is funny is , he starts with he has a set of m of them . , he and then he uses that to classify , he t he tries , using just m minus one of them . so there are m possible subsets of this length - m vector . he tries classifying , using each of the m possible sub - vectors . whichever sub - vector , works the best , i , he says the fe feature that did n't use was the most useless feature , grad c: so we 'll throw it out and we 're gon na randomly select another feature from the set of possible basis functions . professor b: but it 's but it 's , it 's there 's a lot number of things i like about it , let me just say . professor b: so , first thing , you 're right . , i { nonvocalsound } in truth , both pieces of this are have their analogies in we already do . but it 's a different take at how to approach it and potentially one that 's m maybe a bit more systematic than what we ' ve done , and a b a bit more inspiration from auditory things . so it 's so it 's a neat thing to try . the primary features , are , essentially , it 's , , plp or mel cepstrum , like that . you ' ve you ' ve got some , compression . we always have some , the filter bank with a quasi - log scaling . 
, if you put in if you also include the rasta in it i rasta the filtering being done in the log domain has an agc - like , characteristic , which , people typi typically put in these , , auditory front - ends . so it 's very , very similar , but it 's not exactly the same . i would agree that the second one is somewhat more different but , it 's mainly different in that the things that we have been doing like that have been , had a different motivation and have ended up with different kinds of constraints . so , if you look at the lda rasta , what they do is they look at the different eigenvectors out of the lda and they form filters out of it . right ? and those filters have different , kinds of temporal extents and temporal characteristics . and so they 're multi - scale . but , they 're not systematically multi - scale , like " let 's start here and go to there , and go to there " , and . it 's more like , you run it on this , you do discriminant analysis , and you find out what 's helpful . professor b: , you do n't have to but , hynek has . but it 's also , hyn - when hynek 's had people do this lda analysis , they ' ve done it on frequency direction and they ' ve done it on the time direction . he may have had people sometimes doing it on both simultaneously some two - d and that would be the closest to these gabor function things . , but i do n't think they ' ve done that much of that . and , the other thing that 's interesting the , the feature selection thing , it 's a simple method , but i kinda like it . , there 's a old , old method for feature selection . , , i remember people referring to it as old when i was playing with it twenty years ago , so i know it 's pretty old , called stepwise linear discriminant analysis in which you which it 's used in social sciences a lot . so , you pick the best feature . and then you take y you find the next feature that 's the best in combination with it . and then so on and so on . and what michael 's describing seems to me much , much better , because the problem with the stepwise discriminant analysis is that you that , if you ' ve picked the right set of features . just because something 's a good feature does n't mean that you should be adding it . so , , here at least you 're starting off with all of them , and you 're throwing out useless features . that 's that seems , that seems like a lot better idea . , you 're always looking at things in combination with other features . so the only thing is , there 's this artificial question of , exactly how you a how you assess it and if your order had been different in throwing them out . , it still is n't necessarily really optimal , but it seems like a pretty good heuristic . so i th it 's kinda neat . and and , the thing that i wanted to add to it also was to have us use this in a multi - stream way . so that , when you come up with these different things , and these different functions , you do n't necessarily just put them all into one huge vector , but perhaps you have some of them in one stream and some of them in another stream , and . and we ' ve also talked a little bit about , , shihab shamma 's , in which you the way you look at it is that there 's these different mappings and some of them emphasize , upward moving , energy and fre and frequency . and some are emphasizing downward and fast things and slow things and . so there 's a bunch of to look at . but , we 're sorta gon na start off with what he , came here with and branch out from there . 
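The elimination heuristic just described is short enough to sketch. This is a reconstruction from the verbal description, not Michael's code; the scoring function stands in for whatever digit classifier is actually used, and what happens to dropped basis functions is an assumption.

```python
import random

def greedy_feature_swap(features, pool, score, n_rounds=100):
    """Iteratively replace the most useless feature in the working set.

    score(subset) -> held-out classification accuracy (a placeholder
    for whatever classifier is actually used). Each round scores all
    m leave-one-out sub-vectors; the feature whose removal hurts least
    is dropped and a random draw from the pool takes its place.
    """
    features = list(features)
    for _ in range(n_rounds):
        loo_scores = [score(features[:i] + features[i + 1:])
                      for i in range(len(features))]
        worst = max(range(len(features)), key=lambda i: loo_scores[i])
        features.pop(worst)  # its absence hurt least, so it was useless
        # whether dropped functions return to the pool is not specified
        # in the description above; here they do not
        features.append(pool.pop(random.randrange(len(pool))))
    return features
```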
and his advisor is here , too , at the same time . he 'll be another interesting source of wisdom . grad e: as as we were talking about this i was thinking , whether there 's a relationship between , between michael 's approach to , some optimal brain damage or optimal brain surgeon on the neural nets . so , like , if we have , we have our rasta features and presumably the neural nets are learning some a nonlinear mapping , from the features to this probability posterior space . right ? and , and each of the hidden units is learning some pattern . and it could be , like these , these auditory patterns that michael is looking at . and then when you 're looking at the , , the best features , you can take out you can do the do this , brain surgery by taking out , hidden units that do n't really help . professor b: , y actually , you make me think a very important point here is that , if we a again try to look at how is this different from what we 're already doing , there 's a , a nasty argument that could be made th that it 's not different , because , if you ignore the selection part because we are going into a very powerful , nonlinearity that , is combining over time and frequency , and is coming up with its own , better than gabor functions its , neural net functions , its whatever it finds to be best . , so you could argue that it but i do n't actually believe that argument because i know that , you can , computing features is useful , even though in principle you have n't added anything , you subtracted something , from the original waveform , if you ' ve processed it in some way you ' ve typically lost something some information . and so , you ' ve lost information and yet it does better with features than it does with the waveform . , i know that i sometimes it 's useful to constrain things . so that 's why it really seems like the constraint in all this it 's the constraints that are actually what matters . because if it was n't the constraints that mattered , then we would ' ve completely solved this problem long ago , because long ago we already knew how to put waveforms into powerful statistical mechanisms . phd d: . , if we had infinite processing power and data , i , using the waveform could professor b: . then it would work . but but , i it 's with finite of those things , we have done experiments where we literally have put waveforms in and , we kept the number of parameters the same and , and it used a lot of training data . and it and it , not infinite but a lot , and then compared to the number parameters and it , it just does n't do nearly as . so , anyway that you want to suppress it 's not just having the maximum information , you want to suppress , the aspects of the input signal that are not helpful for the discrimination you 're trying to make . so maybe just briefly , grad e: , that segues into what i ' m doing . , so , the big picture is k , come up with a set of , intermediate categories , then build intermediate category classifiers , then do recognition , and , improve speech recognition in that way . , so right now i ' m in the phase where i ' m looking at , deciding on a initial set of intermediate categories . and i ' m looking for data - driven methods that can help me find , a set of intermediate categories of speech that , will help me to discriminate later down the line . and one of the ideas , that was to take a neural net train an ordinary neural net to , to learn the posterior probabilities of phones . 
and so , at the end of the day you have this neural net and it has hidden units . and each of these hidden units is , is learning some pattern . and so , what are these patterns ? i . , and i ' m gon na to try to look at those patterns to see , from those patterns , presumably those are important patterns for discriminating between phone classes . and maybe some , intermediate categories can come from just looking at the patterns of , that the neural net learns . professor b: be - before you get on the next part l let me just point out that s there 's a pretty relationship between what you 're talking about doing and what you 're talking about doing there . right ? so , it seems to me that , if you take away the difference of this primary features , and , say , you use as we had talked about maybe doing you use p - rasta - plp for the primary features , then this feature discovery , thing is just what he 's talking about doing , too , except that he 's talking about doing them in order to discover intermediate categories that correspond to these , what these sub - features are showing you . and , the other difference is that , he 's doing this in a multi - band setting , which means that he 's constraining himself to look across time in some f relatively limited , spectral extent . right ? and whereas in this case you 're saying " let 's just do it unconstrained " . so they 're really pretty related and maybe they 'll be at some point where we 'll see the connections a little better and connect them . grad e: , so that 's the first part , one of the ideas to get at some patterns of intermediate categories . , the other one was , to , come up with a model , a graphical model , that treats the intermediate categories as hidden variables , latent variables , that we anything about , but that through , s statistical training and the em algorithm , at the end of the day , we have learned something about these latent , latent variables which happen to correspond to intermediate categories . { nonvocalsound } , and so those are the two directions that i ' m looking into right now . and , . i that 's it . professor b: . it 's like , the little rats with the little thing dropping down to them .
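The first of those two directions, inspecting what the hidden units of a trained phone net have learned, could start with something as simple as reshaping each unit's input weights into a time-frequency patch. Everything here is illustrative; the frame-major layout and the names are assumptions, not code from any actual system.

```python
import numpy as np

def hidden_unit_patterns(w_in, n_frames, n_bands):
    """View each hidden unit's input weights as a time-frequency patch.

    w_in: (n_hidden, n_frames * n_bands) input-to-hidden weights of a
    trained phone-classification MLP, input layout assumed frame-major.
    Returns unit-normalized patches that can be plotted or clustered to
    look for recurring spectro-temporal patterns.
    """
    patches = w_in.reshape(-1, n_frames, n_bands)
    norms = np.linalg.norm(patches.reshape(len(patches), -1),
                           axis=1, keepdims=True)
    norms = np.maximum(norms, 1e-12)  # avoid dividing by a zero vector
    return patches / norms[:, :, None]
```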
icsi's meeting recorder group met to discuss their progress in various aspects of the aurora project , but also to hear more about other developments relevant to the group. on the aurora project , there were reports on a project conference call , the status of the tandem neural networks , and progress with the mississippi state recognizer. the latency limit has been set , and the group's system is performing very well , but is a little over. on the larger vocabulary task , there are still a few issues to resolve before work can really get started. the group heard about one member's planned work on intermediate classifiers , and also about how a visiting research student's work on auditory models can be applied to their own. speaker me013 wants to know how much memory the tandem network takes up. it is only a minor problem that the latency limit has been set below the current system's level , and keeping the number of features within limits only drops performance a little. a more significant problem is that the tandem approach may not fit in the memory space allowed , and removing it drops performance more. some of the group had issues with mn007's approach to human performance testing , but this was considered more of a side issue. speaker mn007 has been working on the tandem network approach , and the current results are good. he has found a good way of calculating the silence probabilities that does not increase insertions. he also attempted to transcribe data himself , to establish a human performance level. speaker me018 is working on the mississippi state recognizer for dealing with the wall street journal data.
, while we 're still on aurora maybe you can talk a little about the status with the , wall street journal things for it . phd a: so i ' ve , downloaded , a couple of things from mississippi state . , one is their software their , lvcsr system . downloaded the latest version of that . got it compiled and everything . , downloaded the scripts . they wrote some scripts that make it easy to run the system on the wall street journal , data . , so i have n't run the scripts yet . , i ' m waiting there was one problem with part of it and i wrote a note to joe asking him about it . so i ' m waiting to hear from him . but , i did print something out just to give you an idea about where the system is . , they on their web site they , did this little table of where their system performs relative to other systems that have done this task . and , the mississippi state system using a bigram grammar , is at about eight point two percent . other comparable systems from , were getting from , like six point nine , six point eight percent . so they 're phd a: this is on clean . they they ' ve started a table where they 're showing their results on various different noise conditions but they do n't have a whole lot of it filled in and i did n't notice until after i 'd printed it out that , they do n't say here what these different testing conditions are . you actually have to click on it on the web site to see them . so i what those numbers really mean . phd a: , see , i was a little confused because on this table , i ' m the they 're showing word error rate . but on this one , i if these are word error rates because they 're really big . so , under condition one here it 's ten percent . then under three it goes to sixty - four point six percent . phd a: so m i maybe they 're error rates but they 're , they 're really high . professor b: , we w what 's some of the lower error rates on , some of the higher error rates on , some of these w , highly mismatched difficult conditions ? what 's a ? professor b: and if you 're saying sixty - thousand word recognition , getting sixty percent error on some of these noise condition not surprising . phd a: so , that 's probably what it is then . so they have a lot of different conditions that they 're gon na be filling out . professor b: it 's a bad sign when you looking at the numbers , you ca n't tell whether it 's accuracy or error rate . phd a: . it 's it 's gon na be hard . , they 're i ' m still waiting for them to release the , multi - cpu version of their scripts , cuz right now their script only handles processing on a single cpu , which will take a really long time to run . so . but their s phd a: i beli yes , for the training also . and , they 're supposed to be coming out with it any time , the multi - cpu one . so , as soon as they get that , then i 'll grab those too and so w phd a: . i 'll go ahead and try to run it though with just the single cpu one , phd a: and i they , released like a smaller data set that you can use that only takes like sixteen hours to train and . so i can run it on that just to make that the thing works and everything . professor b: ! good . cuz we 'll i the actual evaluation will be in six weeks . so . is that about right you think ? phd a: it was n't on the conference call this morning ? did they say anything on the conference call about , how the wall street journal part of the test was going to be run ? 
because i remembered hearing that some sites were saying that they did n't have the compute to be able to run the wall street journal at their place , so there was some talk about having mississippi state run the systems for them . and i did did that come up ? phd d: , no . , this first , this was not the point of this the meeting today phd d: , frankly , i because i d did n't read also the most recent mails about the large - vocabulary task . but , did you do you still , get the mails ? you 're not on the mailing list or what ? professor b: i have to say , there 's something funny - sounding about saying that one of these big companies does n't have enough cup compute power do that , so they 're having to have it done by mississippi state . it just sounds funny . phd a: because there 's this whole issue about , simple tuning parameters , like word insertion penalties . and whether or not those are going to be tuned or not , and so . , it makes a big difference . if you change your front - end , the scale is completely can be completely different , so . it seems reasonable that at least should be tweaked to match the front - end . phd a: i did , but joe said , " what you 're saying makes sense and i " . so he does n't the answer is . , that 's th we had this back and forth a little bit about , are sites gon na are you gon na run this data for different sites ? and , if mississippi state runs it , then maybe they 'll do a little optimization on that parameter , and , but then he was n't asked to run it for anybody . so i it 's just not clear yet what 's gon na happen . , he 's been putting this out on their web site and for people to grab but i have n't heard too much about what 's happening . professor b: so it could be , chuck and i had actually talked about this a couple times , and over some lunches , that , one thing that we might wanna do the - there 's this question about , what do you wanna scale ? suppose y you ca n't adjust these word insertion penalties and , so you have to do everything at the level of the features . what could you do ? and , one thing i had suggested at an earlier time was maybe some scaling , some root of the , , features . but the problem with that is that is n't quite the same , it occurred to me later , because what you really want to do is scale the , @ the range of the likelihoods rather than professor b: but , what might get at something similar , it just occurred to me , is an intermediate thing is because we do this strange thing that we do with the tandem system , at least in that system what you could do is take the , , values that come out of the net , which are something like log probabilities , and scale those . and then , then at least those things would have the right values or the right range . and then that goes into the rest of it and then that 's used as observations . so it 's , another way to do it . professor b: but but , so because what we 're doing is pretty strange and complicated , we do n't really the effect is at the other end . so , my thought was maybe , they 're not used as probabilities , but the log probabilities we 're taking advantage of the fact that something like log probabilities has more of a gaussian shape than gaus - than probabilities , and so we can model them better . so , in a way we 're taking advantage of the fact that they 're probabilities , because they 're this quantity that looks gaussian when you take it 's log . so , maybe it would have a reasonable effect to do that . i d i . 
but , i we still have n't had a ruling back on this . and we may end up being in a situation where we just really ca n't change the word insertion penalty . but the other thing we could do is also we could , this may not help us , in the evaluation but it might help us in our understanding at least . we might , just run it with different insper insertion penalties , and show that , " , ok , not changing it , playing the rules the way you wanted , we did this . but if we did that , it made a big difference . " phd a: i wonder if it might be possible to , simulate the back - end with some other system . so we get our f front - end features , and then , as part of the process of figuring out the scaling of these features , if we 're gon na take it to a root or to a power , we have some back - end that we attach onto our features that simulates what would be happening . phd a: and just adjust it until that our l version of the back - end , decides that professor b: , we can probably use the real thing , ca n't we ? and then jus just , use it on a reduced test set . phd a: . , . that 's true . and then we just use that to determine some scaling factor that we use . professor b: . so , that 's a reasonable thing to do and the only question is what 's the actual knob that we use ? and the knob that we use should , unfortunately , like i say , i the analytic solution to this cuz what we really want to do is change the scale of the likelihoods , not the cha not the scale of the observations . phd a: it 's the same system that they use when they participate in the hub - five evals . it 's a , came out of , looking a lot like htk . , they started off with , when they were building their system they were always comparing to htk to make they were getting similar results . and so , it 's a gaussian mixture system , professor b: do they have the same mix - down procedure , where they start off with a small number of some things professor b: d do what tying they use ? are they some a bunch of gaussians that they share across everything ? or or if it 's ? phd a: , th i have i do n't have it up here but i have a the whole system description , that describes exactly what their system is and i ' m not . but , it 's some a mixture of gaussians and , clustering they 're they 're trying to put in all of the standard features that people use nowadays . professor b: so the other , aurora thing maybe is if any of this is gon na come in time to be relevant , but , we had talked about , guenter playing around , over in germany and , @ , possibly coming up with something that would , , fit in later . , i saw that other mail where he said that he , it was n't going to work for him to do cvs . professor b: so he just has it all sitting there . so if he 'll he might work on improving the noise estimate or on some histogram things , or . saw the eurospeech we we did n't talk about it at our meeting but saw the just read the paper . someone , i forget the name , and ney , about histogram equalization ? did you see that one ? phd d: it was something similar to n on - line normalization finally , in the idea of normalizing professor b: . but it 's a little more it 's a little finer , so they had like ten quantiles and they adjust the distribution . professor b: and then , so this is just a histogram of the amplitudes , i . and then , people do this in image processing some . you have this histogram of levels of brightness or whatever . 
and and then , when you get a new thing that you want to adjust to be better in some way , you adjust it so that the histogram of the new data looks like the old data . you do this piece - wise linear or , some piece - wise approximation . they did a one version that was piece - wise linear and another that had a power law thing between them between the points . they said they s they see it in a way as s for the speech case as being a generalization of spectral subtraction in a way , because , in spectral subtraction you 're trying to get rid of this excess energy . , it 's not supposed to be there . and , this is adjusting it for a lot of different levels . and then they have s they have some , a floor , so if it gets too low you do n't do it . and they claimed very results , phd a: and and that , histogram represents the different energy levels that have been seen at that frequency ? professor b: i do n't remember that . and how often they you ' ve seen them . . and they do they said that they could do it for the test so you do n't have to change the training . you just do a measurement over the training . and then , for testing , you can do it for one per utterance . even relatively short utterances . and they claim it works pretty . phd a: so they , is the idea that you run a test utterance through some histogram generation thing and then you compare the histograms and that tells you what to do to the utterance to make it more like ? professor b: i in pri in principle . i did n't read carefully how they actually implemented it , whether it was some , on - line thing , or whether it was a second pass , or what . but but they that that was the idea . so that seemed , different . we 're curious about , what are some things that are , u , @ conceptually quite different from what we ' ve done . cuz we , one thing that w that , stephane and sunil seemed to find , was , they could actually make a unified piece of software that handled a range of different things that people were talking about , and it was really just setting of different constants . and it would turn , one thing into another . it 'd turn wiener filtering into spectral subtraction , or whatever . but there 's other things that we 're not doing . so , we 're not making any use of pitch , which again , might be important , because the between the harmonics is probably a schmutz . and and the , transcribers will have fun with that . the , at the harmonics is n't so much . and and , and we there 's this overall idea of really matching the hi distributions somehow . , not just subtracting off your estimate of the noise . so i , guenter 's gon na play around with some of these things now over this next period , or ? professor b: , he 's got it anyway , so he can . so potentially if he came up with something that was useful , like a diff a better noise estimation module , he could ship it to you guys u up there we could put it in . professor b: that 's good . so , why do n't we just , starting a w couple weeks from now , especially if you 're not gon na be around for a while , we 'll be shifting more over to some other territory . but , , n not so much in this meeting about aurora , but , , maybe just , quickly today about maybe you could just say a little bit about what you ' ve been talking about with michael . and and then barry can say something about what we 're talking about . grad c: ok . so michael kleinschmidt , who 's a phd student from germany , showed up this week . he 'll be here for about six months . 
and he 's done some work using an auditory model of , human hearing , and using that f , to generate speech recognition features . and he did work back in germany with , a toy recognition system using , isolated digit recognition as the task . it was actually just a single - layer neural network that classified words classified digits , . , and he tried that on some aurora data and got results that he thought seemed respectable . and he w he 's coming here to u use it on a , a real speech recognition system . so i 'll be working with him on that . and , maybe i should say a little more about these features , although i do n't understand them that . the it 's a two - stage idea . and , the first stage of these features correspond to what 's called the peripheral auditory system . and i that is like a filter bank with a compressive nonlinearity . and i ' m - i ' m not what we have @ in there that is n't already modeled in something like , plp . i should learn more about that . and then the second stage is , the most different thing , from what we usually do . it 's , it computes features which are , based on like based on diffe different w , wavelet basis functions used to analyze the input . so th he uses analysis functions called gabor functions , which have a certain extent , in time and in frequency . and the idea is these are used to sample , the signal in a represented as a time - frequency representation . so you 're sampling some piece of this time - frequency plane . and , that , is interesting , cuz , @ for one thing , you could use it , in a multi - scale way . you could have these instead of having everything like we use a twenty - five millisecond or so analysis window , typically , and that 's our time scale for features , but you could using this , basis function idea , you could have some basis functions which have a lot longer time scale and , some which have a lot shorter , and so it would be like a set of multi - scale features . so he 's interested in , th - this is because it 's , there are these different parameters for the shape of these basis functions , there are a lot of different possible basis functions . and so he actually does an optimization procedure to choose an optimal set of basis functions out of all the possible ones . grad c: the method he uses is funny is , he starts with he has a set of m of them . , he and then he uses that to classify , he t he tries , using just m minus one of them . so there are m possible subsets of this length - m vector . he tries classifying , using each of the m possible sub - vectors . whichever sub - vector , works the best , i , he says the fe feature that did n't use was the most useless feature , grad c: so we 'll throw it out and we 're gon na randomly select another feature from the set of possible basis functions . professor b: but it 's but it 's , it 's there 's a lot number of things i like about it , let me just say . professor b: so , first thing , you 're right . , i { nonvocalsound } in truth , both pieces of this are have their analogies in we already do . but it 's a different take at how to approach it and potentially one that 's m maybe a bit more systematic than what we ' ve done , and a b a bit more inspiration from auditory things . so it 's so it 's a neat thing to try . the primary features , are , essentially , it 's , , plp or mel cepstrum , like that . you ' ve you ' ve got some , compression . we always have some , the filter bank with a quasi - log scaling . 
, if you put in if you also include the rasta in it i rasta the filtering being done in the log domain has an agc - like , characteristic , which , people typi typically put in these , , auditory front - ends . so it 's very , very similar , but it 's not exactly the same . i would agree that the second one is somewhat more different but , it 's mainly different in that the things that we have been doing like that have been , had a different motivation and have ended up with different kinds of constraints . so , if you look at the lda rasta , what they do is they look at the different eigenvectors out of the lda and they form filters out of it . right ? and those filters have different , kinds of temporal extents and temporal characteristics . and so they 're multi - scale . but , they 're not systematically multi - scale , like " let 's start here and go to there , and go to there " , and . it 's more like , you run it on this , you do discriminant analysis , and you find out what 's helpful . professor b: , you do n't have to but , hynek has . but it 's also , hyn - when hynek 's had people do this lda analysis , they ' ve done it on frequency direction and they ' ve done it on the time direction . he may have had people sometimes doing it on both simultaneously some two - d and that would be the closest to these gabor function things . , but i do n't think they ' ve done that much of that . and , the other thing that 's interesting the , the feature selection thing , it 's a simple method , but i kinda like it . , there 's a old , old method for feature selection . , , i remember people referring to it as old when i was playing with it twenty years ago , so i know it 's pretty old , called stepwise linear discriminant analysis in which you which it 's used in social sciences a lot . so , you pick the best feature . and then you take y you find the next feature that 's the best in combination with it . and then so on and so on . and what michael 's describing seems to me much , much better , because the problem with the stepwise discriminant analysis is that you that , if you ' ve picked the right set of features . just because something 's a good feature does n't mean that you should be adding it . so , , here at least you 're starting off with all of them , and you 're throwing out useless features . that 's that seems , that seems like a lot better idea . , you 're always looking at things in combination with other features . so the only thing is , there 's this artificial question of , exactly how you a how you assess it and if your order had been different in throwing them out . , it still is n't necessarily really optimal , but it seems like a pretty good heuristic . so i th it 's kinda neat . and and , the thing that i wanted to add to it also was to have us use this in a multi - stream way . so that , when you come up with these different things , and these different functions , you do n't necessarily just put them all into one huge vector , but perhaps you have some of them in one stream and some of them in another stream , and . and we ' ve also talked a little bit about , , shihab shamma 's , in which you the way you look at it is that there 's these different mappings and some of them emphasize , upward moving , energy and fre and frequency . and some are emphasizing downward and fast things and slow things and . so there 's a bunch of to look at . but , we 're sorta gon na start off with what he , came here with and branch out from there . 
and his advisor is here , too , at the same time . he 'll be another interesting source of wisdom . grad e: as as we were talking about this i was thinking , whether there 's a relationship between , between michael 's approach to , some optimal brain damage or optimal brain surgeon on the neural nets . so , like , if we have , we have our rasta features and presumably the neural nets are learning some a nonlinear mapping , from the features to this probability posterior space . right ? and , and each of the hidden units is learning some pattern . and it could be , like these , these auditory patterns that michael is looking at . and then when you 're looking at the , , the best features , you can take out you can do the do this , brain surgery by taking out , hidden units that do n't really help . professor b: , y actually , you make me think a very important point here is that , if we a again try to look at how is this different from what we 're already doing , there 's a , a nasty argument that could be made th that it 's not different , because , if you ignore the selection part because we are going into a very powerful , nonlinearity that , is combining over time and frequency , and is coming up with its own , better than gabor functions its , neural net functions , its whatever it finds to be best . , so you could argue that it but i do n't actually believe that argument because i know that , you can , computing features is useful , even though in principle you have n't added anything , you subtracted something , from the original waveform , if you ' ve processed it in some way you ' ve typically lost something some information . and so , you ' ve lost information and yet it does better with features than it does with the waveform . , i know that i sometimes it 's useful to constrain things . so that 's why it really seems like the constraint in all this it 's the constraints that are actually what matters . because if it was n't the constraints that mattered , then we would ' ve completely solved this problem long ago , because long ago we already knew how to put waveforms into powerful statistical mechanisms . phd d: . , if we had infinite processing power and data , i , using the waveform could professor b: . then it would work . but but , i it 's with finite of those things , we have done experiments where we literally have put waveforms in and , we kept the number of parameters the same and , and it used a lot of training data . and it and it , not infinite but a lot , and then compared to the number parameters and it , it just does n't do nearly as . so , anyway that you want to suppress it 's not just having the maximum information , you want to suppress , the aspects of the input signal that are not helpful for the discrimination you 're trying to make . so maybe just briefly , grad e: , that segues into what i ' m doing . , so , the big picture is k , come up with a set of , intermediate categories , then build intermediate category classifiers , then do recognition , and , improve speech recognition in that way . , so right now i ' m in the phase where i ' m looking at , deciding on a initial set of intermediate categories . and i ' m looking for data - driven methods that can help me find , a set of intermediate categories of speech that , will help me to discriminate later down the line . and one of the ideas , that was to take a neural net train an ordinary neural net to , to learn the posterior probabilities of phones . 
and so , at the end of the day you have this neural net and it has hidden units , and each of these hidden units is learning some pattern . and so , what are these patterns ? i do n't know . and i ' m gon na try to look at those patterns , because presumably those are important patterns for discriminating between phone classes , and maybe some intermediate categories can come from just looking at the patterns that the neural net learns . professor b: before you get on to the next part , let me just point out that there 's a pretty close relationship between what you 're talking about doing and what he 's talking about doing there . right ? so , it seems to me that , if you take away the difference of the primary features and , say , you use as we had talked about maybe doing rasta - plp for the primary features , then this feature discovery thing is just what he 's talking about doing too , except that he 's talking about doing it in order to discover intermediate categories that correspond to what these sub - features are showing you . and the other difference is that he 's doing this in a multi - band setting , which means that he 's constraining himself to look across time in some relatively limited spectral extent . right ? whereas in this case you 're saying " let 's just do it unconstrained " . so they 're really pretty related , and maybe there 'll be some point where we see the connections a little better and connect them . grad e: , so that 's the first part , one of the ideas to get at some patterns of intermediate categories . the other one was to come up with a graphical model that treats the intermediate categories as hidden variables , latent variables that we do n't know anything about , but where , through statistical training and the em algorithm , at the end of the day we have learned something about these latent variables , which happen to correspond to intermediate categories . and so those are the two directions that i ' m looking into right now . and i think that 's it . professor b: . it 's like the little rats with the little thing dropping down to them .
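a minimal sketch of the latent - variable idea , assuming ( hypothetically ) that frames are modelled as a mixture whose hidden component identity plays the role of an " intermediate category " ; em then learns those categories without any labels . this is a plain gaussian - mixture em in python / numpy , a stand - in for whatever graphical model is actually used , with all data invented for illustration .

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical "frames": 2-d features drawn from 3 unknown groups
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
X = np.vstack([rng.normal(c, 0.7, size=(150, 2)) for c in centers])

K = 3                                     # number of latent categories
N, D = X.shape
prior = np.full(K, 1.0 / K)               # category priors
mu = X[rng.choice(N, K, replace=False)]   # initialise means from data
var = np.full(K, X.var())                 # spherical variances, kept simple

for _ in range(50):
    # E step: posterior responsibility of each latent category per frame
    sq = ((X[:, None, :] - mu) ** 2).sum(-1)            # (N, K)
    logp = -0.5 * (sq / var + D * np.log(2 * np.pi * var)) + np.log(prior)
    logp -= logp.max(axis=1, keepdims=True)             # numerical stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)

    # M step: re-estimate each category from its responsibilities
    nk = r.sum(axis=0)
    prior = nk / N
    mu = (r.T @ X) / nk[:, None]
    var = (r * ((X[:, None, :] - mu) ** 2).sum(-1)).sum(axis=0) / (nk * D)

print("learned category means:\n", np.round(mu, 2))
```

the point of the sketch is only the shape of the training loop : the category identity is never observed , yet after em the learned components can be inspected and , with luck , interpreted as intermediate categories .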
summary: icsi's meeting recorder group met to discuss their progress on various aspects of the aurora project , and also to hear about other developments relevant to the group . on the aurora project , there were reports on a project conference call , the status of the tandem neural networks , and progress with the mississippi state recognizer . the latency limit has been set , and the group's system is performing very well , but is a little over . on the larger - vocabulary task , there are still a few issues to resolve before work can really get started . the group heard about one of its members' plans for work on intermediate classifiers , and also about how a visiting research student's work on auditory models can be applied to their own . speaker me013 wants to know how much memory the tandem network takes up . it is only a minor problem that the latency limit has been set below the current system's level , and keeping the number of features within limits only drops performance a little . a more significant problem is that the tandem approach may not fit in the memory space allowed , and removing it drops performance more . some of the group had issues with mn007's approach to human performance testing , but this was considered more of a side issue . speaker mn007 has been working on the tandem network approach , and the current results are good . he has found a good way of calculating the silence probabilities that does not increase insertions . he also attempted to transcribe data himself , to establish a human performance level . speaker me018 is working on the mississippi state recognizer for dealing with the wall street journal data .